name | title | abstract | fulltext | keywords
---|---|---|---|---
631169 | An Accurate Worst Case Timing Analysis for RISC Processors. | An accurate and safe estimation of a task's worst case execution time (WCET) is crucial for reasoning about the timing properties of real-time systems. In RISC processors, the execution time of a program construct (e.g., a statement) is affected by various factors such as cache hits/misses and pipeline hazards, and these factors pose serious problems in analyzing the WCETs of tasks. To analyze the timing effects of RISC processors' pipelined execution and cache memory, we propose extensions to the original timing schema where the timing information associated with each program construct is a simple time-bound. In our approach, associated with each program construct is a worst case timing abstraction (WCTA), which contains detailed timing information of every execution path that might be the worst case execution path of the program construct. This extension leads to a revised timing schema that is similar to the original timing schema except that concatenation and pruning operations on WCTAs are newly defined to replace the add and max operations on time-bounds in the original timing schema. Our revised timing schema accurately accounts for the timing effects of pipelined execution and cache memory not only within but also across program constructs. This paper also reports on preliminary results of WCET analysis for a RISC processor. Our results show that tight WCET bounds (within a maximum of about 30% overestimation) can be obtained by using the revised timing schema approach. | INTRODUCTION
In real-time computing systems, tasks have timing requirements (i.e., deadlines) that must be met
for correct operation. Thus, it is of utmost importance to guarantee that tasks finish before their
deadlines. Various scheduling techniques, both static and dynamic, have been proposed to ensure this
guarantee. These scheduling algorithms generally require that the WCET (Worst Case Execution
Time) of each task in the system be known a priori. Therefore, it is not surprising that considerable
research has focused on the estimation of the WCETs of tasks.
In a non-pipelined processor without cache memory, it is relatively easy to obtain a tight bound
on the WCET of a sequence of instructions. One simply has to sum up their individual execution
times that are usually given in a table. The WCET of a program can then be calculated by
traversing the program's syntax tree bottom-up and applying formulas for calculating the WCETs
of various language constructs. However, for RISC processors such a simple analysis may not
be appropriate because of their pipelined execution and cache memory. In RISC processors, an
instruction's execution time varies widely depending on many factors such as pipeline stalls due to
hazards and cache hits/misses. One can still obtain a safe WCET bound by assuming the worst
case execution scenario (e.g., each instruction suffers from every kind of hazard and every memory
access results in a cache miss). However, such a pessimistic approach would yield an extremely loose
WCET bound resulting in severe under-utilization of machine resources.
Our goal is to predict tight and safe WCET bounds of tasks for RISC processors. Achieving
this goal would permit RISC processors to be widely used in real-time systems. Our approach
is based on an extension of the timing schema [1]. The timing schema is a set of formulas for
computing execution time bounds of language constructs. In the original timing schema, the timing
information associated with each program construct is a simple time-bound. This choice of timing
information facilitates a simple and accurate timing analysis for processors with fixed execution
times. However, for RISC processors, such timing information is not sufficient to accurately account
for timing variations resulting from pipelined execution and cache memory.
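For reference, the original timing schema combines these simple time-bounds with formulas roughly of the following shape (a simplified sketch; the schema of [1] carries lower and upper bounds rather than single values):

    T(S_1; S_2) = T(S_1) + T(S_2)
    T(if (exp) S_1 else S_2) = T(exp) + max( T(S_1), T(S_2) )
    T(while (exp) S_1) = N × ( T(exp) + T(S_1) ) + T(exp)

Every T here is a single quantity per construct, which is exactly the kind of timing information that the WCTA extension described below replaces.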
This paper proposes extensions to the original timing schema to rectify the above problem. We
associate with each program construct what we call a WCTA (Worst Case Timing Abstraction). The WCTA
of a program construct contains the timing information of every execution path that might be the
worst case execution path of the program construct. The timing information of each path includes information
about the factors that may affect the timing of the succeeding program construct. It also includes
the information that is needed to refine the execution time of the program construct when the
timing information of the preceding program construct becomes available at a later stage of WCET
analysis. This extension leads to a revised timing schema that accurately accounts for the timing
variation which results from the history sensitive nature of pipelined execution and cache memory.
We assume that each task is sequential and that some form of cache partitioning [2, 3] is used
to prevent tasks from affecting each other's timing behavior. Without these assumptions, it would
not be possible to eliminate the unpredictability due to task interaction. For example, consider a
real-time system in which a preemptive scheduling policy is used and the cache is not partitioned.
In such a system, a burst of cache misses usually occurs when a previously preempted task resumes
execution. Increase of the task execution time resulting from such a burst of cache misses cannot
be bounded by analyzing each task in isolation.
This paper is organized as follows. In Section II, we survey the related work. Section III focuses
on the problems associated with accurately estimating the WCETs of tasks in pipelined processors.
We then present our method for solving these problems. In Section IV, we describe an accurate
timing analysis technique for instruction cache memory and explain how this technique can be
combined with the pipeline timing analysis technique given in Section III. Section V identifies the
differences between the WCET analysis of instruction caches and that of data caches, and explains
how we address the issues resulting from these differences. In Section VI, we report on preliminary
results of WCET analyses for a RISC processor. Finally, the conclusion is given in Section VII.
II RELATED WORK
A timing prediction method for real-time systems should be able to give safe and accurate WCET
bounds of tasks. Measurement-based and analytical techniques have been used to obtain such
bounds. Measurement-based techniques are, in many cases, inadequate for producing timing estimates
for real-time systems, since their predictions are usually not guaranteed to be safe or can be obtained
only at enormous cost. Due to these limitations, analytical approaches are becoming more popular [4, 5, 6,
7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. Many of these analytical studies, however, consider a simple
machine model, thus largely ignoring the timing effects of pipelined execution and cache memory
[8, 12, 13, 15].
A. Timing Analysis of Pipelined Execution
The timing effects of pipelined execution have been recently studied by Harmon, Baker, and Whalley
[6], Harcourt, Mauney, and Cook [5], Narasimhan and Nilsen [11], and Choi, Lee, and Kang
[4]. In these studies, the execution time of a sequence of instructions is estimated by modeling a
pipelined processor as a set of resources and representing each instruction as a process that acquires
and consumes a subset of the resources in time. In order to mechanize the process of calculating
the execution time, they use various techniques: pattern matching [6], SCCS (Synchronous Calculus
of Communicating Systems) [5], retargetable pipeline simulation [11], and ACSR (Algebra of Communicating
Shared Resources) [4]. Although these approaches have the advantage of being formal
and machine independent, their applications are currently limited to calculating the execution time
of a sequence of instructions or a given sequence of basic blocks 1 . Therefore, they rely on ad hoc
methods to calculate the WCETs of programs.
The pipeline timing analysis technique by Zhang, Burns and Nicholson [16] can mechanically
calculate the WCETs of programs for a pipelined processor. Their analysis technique is based
on a mathematical model of the pipelined Intel 80C188 processor. This model takes into account
the overlap between instruction execution and opcode prefetching in 80C188. In their approach,
the WCET of each basic block in a program is individually calculated based on the mathematical
model. The WCET of the program is then calculated using the WCETs of the constituent basic
blocks and timing formulas for calculating the WCETs of various language constructs.
Although this approach represents significant progress over the previous schemes that did not
consider the timing effects of pipelined execution, it still suffers from two inefficiencies. First, the
pipelining effects across basic blocks are not accurately accounted for. In general, due to data
dependencies and resource conflicts within the execution pipeline, a basic block's execution time
will differ depending on what the surrounding basic blocks are. However, since their approach
requires that the WCET of each basic block be independently calculated, they make the worst
case assumption on the preceding basic block (e.g., the last instruction of every basic block that
can precede the basic block being analyzed has data memory access, which prevents the opcode
prefetching of the first instruction of the basic block being analyzed). This assumption is reasonable
for their target processor since its pipeline has only two stages. However, completely ignoring
pipelining effects across basic blocks may yield a very loose WCET estimation for more deeply
pipelined processors. Second, although their mathematical model is very effective for the Intel
80C188 processor, the model is not general enough to be applicable to other pipelined processors.
This is due to the many machine specific assumptions made in their model that are difficult to
generalize.
1 A basic block is a sequence of consecutive instructions in which flow of control enters at the beginning and leaves
at the end without halt or possibility of branching except at the end [17].
B. Timing Analysis of Cache Memory
Cache memories have been widely used to bridge the speed gap between processor and main memory.
However, designers of hard real-time systems are wary of using caches in their systems since the
performance of caches is considered to be unpredictable. This concern stems from the following
two sources: inter-task interference and intra-task interference. Inter-task interference is caused by
task preemption. When a task is preempted, most of its cache blocks 2 are displaced by the newly
scheduled task and the tasks scheduled thereafter. When the preempted task resumes execution, it
makes references to the previously displaced blocks and experiences a burst of cache misses. This
type of cache miss cannot be avoided in real-time systems with preemptive scheduling of tasks. The
result is a wide variation in task execution time. This execution time variation can be eliminated
by partitioning the cache and dedicating one or more partitions to each real-time task [2, 3]. This
cache partitioning approach eliminates the inter-task interference caused by task preemption.
Intra-task interference in caches occurs when more than one memory block of the same task
compete with each other for the same cache block. This interference results in two types of cache
miss: capacity misses and conflict misses [19]. Capacity misses are due to finite cache size. Conflict
misses, on the other hand, are caused by a limited set associativity. These types of cache miss
cannot be avoided if the cache has a limited size and/or set associativity.
Among the analytical WCET prediction schemes that we are aware of, only four schemes take
into account the timing variation resulting from intra-task cache interference (three for instruction
caches [10, 9, 7] and one for data caches [14]). The static cache simulation approach which statically
predicts hits or misses of instruction references is due to Arnold, Mueller, Whalley and Harmon [10].
In this approach, instructions are classified into the following four categories based on a data flow
analysis:
- always-hit: The instruction is always in the cache.
- always-miss: The instruction is never in the cache.
- first-hit: The first reference to the instruction hits in the cache. However, all the subsequent references miss in the cache.
2 A block is the minimum unit of information that can be either present or not present in the cache-main memory hierarchy [18].
Fig. 1. Sample C program fragment (a loop whose body executes either S_i or S_j depending on cond)
- first-miss: The first reference to the instruction misses in the cache. However, all the subsequent references hit in the cache.
This approach is simple but has a number of limitations. One limitation is that the analysis is too
conservative. As an example, consider the program fragment given in Fig. 1. Assume that both of
the instruction memory blocks corresponding to S_i and S_j (i.e., b_i and b_j) are mapped to the same
cache block and that no other instruction memory block is mapped to that cache block. Further
assume that the execution time of S i is much longer than that of S j . Under these assumptions, the
worst case execution scenario of this program fragment is to repeatedly execute S i within the loop.
In this worst case scenario, only the first access to b i will miss in the cache and all the subsequent
accesses within the loop will hit in the cache. However, by being classified as always-miss, all the
references to b i are treated as cache misses in this approach, which leads to a loose estimation of the
loop's WCET. Another limitation of this approach is that the approach does not address the issues
regarding pipelined execution and the use of data caches, which are commonly found in most RISC
processors.
In [9], Niehaus et al. discuss the potential benefits of identifying instruction references corresponding
to always-hit and first-miss in the static cache simulation approach. However, as stated
in [10], their analysis is rather abstract and no general method for analyzing the worst case timing
behavior of programs is given.
In [7], Liu and Lee propose techniques to derive WCET bounds of a cached program based on
a transition diagram of cached states. Their WCET analysis uses an exhaustive search technique
through the state transition diagram which has an exponential time complexity. To reduce the time
complexity of this approach, they propose a number of approximate analysis methods each of which
makes a different trade-off between the analysis complexity and the tightness of the resultant WCET
bounds. Although the paper mentions that the methods are equally applicable to the data cache, the
main focus is on the instruction cache since the issues pertinent to the data cache such as handling
of write references and references with unknown addresses (cf. Section V) are not considered. Also,
it is not clear how one can incorporate the analysis of pipelined execution into the framework.
Rawat performs a static analysis for data caches [14]. His approach is similar to the graph
coloring approach to register allocation [20]. The analysis proceeds as follows. First, live ranges of
variables and those of memory blocks are computed 3 . Second, an interference graph is constructed
for each cache block. An edge in the interference graph connects two memory blocks if they are
mapped to the same cache block and their live ranges overlap with each other. Third, live ranges
of memory blocks are split until they do not overlap with each other. If a live range of a memory
block does not overlap with that of any other memory block, the memory block never gets replaced
from the cache during execution within the live range. Therefore, the number of cache misses due
to a memory block can be calculated from the frequency counts of its live ranges (i.e., how many
times the program control flows into the live ranges). Finally, the total number of data cache misses
is estimated by summing up the frequencies of all the live ranges of all the memory blocks used in
the program.
Although this analysis method is a step forward from the analysis methods in which every data
reference is treated as a cache miss, it still suffers from the following three limitations. First, the
analysis does not allow function calls and global variables, which severely limits its applicability.
Second, the analysis leads to an overestimation of data cache misses resulting from the assumption
that every possible execution path can be the worst case execution path. This limitation is similar
to the first limitation of the static cache simulation approach. The third limitation of this approach
is that it does not address the issues of locating the worst case execution path and of calculating
the WCET, again limiting its applicability.
3 A live range of a variable (memory block) is a set of basic blocks during whose execution the variable (memory
block) potentially resides in the cache [14].
Fig. 2. Sample MIPS assembly code (a sequence of lw, nop, lw, nop, mult instructions) and the corresponding reservation table (pipeline stages IF, RD, ALU, MD)
III PIPELINING EFFECTS
In pipelined processors, various execution steps of instructions are simultaneously overlapped. Due
to this overlapped execution, an instruction's execution time will differ depending on what the
surrounding instructions are. However, this timing variation could not be accurately accounted for
in the original timing schema since the timing information associated with each program construct
is a simple time-bound. In this section, we extend the timing schema to rectify this problem.
In our extended timing schema, the timing information of each program construct is a set of
reservation tables rather than a time-bound. The reservation table was originally proposed to describe
and analyze the activities within a pipeline [21]. In a reservation table, the vertical dimension
represents the stages in the pipeline and the horizontal dimension represents time. Fig. 2 shows a
sample basic block in the MIPS assembly language [22] and the corresponding reservation table. In
the figure, each x in the reservation table specifies the use of the corresponding stage for the indicated
time slot. In the proposed approach, we analyze the timing interactions among instructions
within a basic block by building its reservation table. In the reservation table, not only the conflicts
in the use of pipeline stages but also data dependencies among instructions are considered.
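As a concrete illustration of this bookkeeping (our own sketch, not the paper's implementation), a reservation table can be stored as a boolean matrix indexed by pipeline stage and cycle; the stage set follows Fig. 2, and the sizes and latencies are illustrative assumptions:

    #include <string.h>

    #define N_STAGES   4          /* IF, RD, ALU, MD, as in Fig. 2 */
    #define MAX_CYCLES 256        /* illustrative upper bound      */

    typedef struct {
        int  t_max;                          /* number of occupied columns        */
        char use[N_STAGES][MAX_CYCLES];      /* use[s][t] != 0: stage s busy at t */
    } reservation_table_sketch;

    static void rt_init(reservation_table_sketch *rt)
    {
        memset(rt, 0, sizeof *rt);
    }

    /* Mark an instruction that occupies stage `stage` for `len` cycles starting
       at cycle `start`; t_max grows to cover the newly used columns.            */
    static void rt_reserve(reservation_table_sketch *rt, int stage, int start, int len)
    {
        int t;
        for (t = start; t < start + len; t++)
            rt->use[stage][t] = 1;
        if (start + len > rt->t_max)
            rt->t_max = start + len;
    }

The issue times chosen while filling the table are where pipeline stalls due to structural conflicts and data dependencies are accounted for.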
A program construct such as an if statement may have more than one execution path. Moreover,
in pipelined processors, it is not always possible to determine which one of the execution paths is
the worst case execution path by analyzing the program construct alone. As an example, suppose
that an if statement has two execution paths corresponding to the two reservation tables shown in
Fig. 3. The worst case execution path here depends on the instructions in the preceding program
constructs. For example, if one of the instructions near the end of the preceding program construct
uses the MD stage, the execution path corresponding to R 1 will become the worst case execution
path. On the other hand, if there is an instruction using the DIV stage instead, the execution path
corresponding to R 2 will become the worst case execution path. Therefore, we should keep both
Fig. 3. Two reservation tables R_1 and R_2 with equal t_max (pipeline stages IF, RD, ALU, MD)
struct pipeline_timing_information {
    time t_max;
    reservation_table head[δ_head];
    reservation_table tail[δ_tail];
};
Fig. 4. Reservation table data structure
reservation tables until the timing information of the preceding program constructs is known.
Fig. 4 shows the data structure for a reservation table used in our approach in both textual
and graphical form. In the data structure, t_max is the worst case execution time of the reservation
table, which is determined by the number of columns in the reservation table. In implementation,
not all the columns in the reservation table are maintained. Instead, we maintain only the first few
(i.e., δ_head) columns and the last few (i.e., δ_tail) columns. The larger δ_head and δ_tail are, the tighter
the resulting WCET estimation is, since more execution overlap between program constructs can be
modeled, as we will see later. Making δ_head and δ_tail sufficiently large corresponds to the case where the full reservation
table is maintained.
As explained earlier, we associate with each program construct a set of reservation tables where
each reservation table contains the timing information of an execution path that might be the worst
case execution path of the program construct. We call this set the WCTA (Worst Case Timing
Abstraction) of the program construct. This WCTA corresponds to the time-bound in the original
timing schema, and each element in the WCTA is denoted by (t_max, head, tail).
With this framework, the timing schema can be extended so that the timing interactions across
Fig. 5. Example application of the ⊕ operation
program constructs can be accurately accounted for. In the extended timing schema, the timing
formula of a sequential statement S: S_1; S_2 is given by

W(S) = W(S_1) ⊕ W(S_2)

where W(S), W(S_1), and W(S_2) are the WCTAs of S, S_1, and S_2, respectively. The ⊕ operation
between two WCTAs is defined as

W(S_1) ⊕ W(S_2) = { w_1 ⊕ w_2 | w_1 ∈ W(S_1), w_2 ∈ W(S_2) }

where w_1 and w_2 are reservation tables and the ⊕ operation on reservation tables concatenates two reservation tables
resulting in another reservation table. This concatenation operation models the pipelined execution
of a sequence of instructions followed by another sequence of instructions. The semantics of this
operation for a target processor can be deduced from its data book. Fig. 5 shows an application of
the ⊕ operation. From the figure, one can note that as more columns are maintained in head and
tail, more overlap between adjacent program constructs can be modeled and, therefore, a tighter
WCET estimation can be obtained.
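A minimal sketch of how the ⊕ concatenation could be realized on the matrix representation sketched above, assuming for simplicity that only structural conflicts in stage usage delay the second table and that the result fits within MAX_CYCLES columns; a real implementation must also honor issue order, data hazards, and the other machine specific rules taken from the data book:

    /* Concatenate r2 after r1: slide r2 to the right until none of its stage
       usages collides with r1's, then merge the two tables.                  */
    static reservation_table_sketch rt_concat(const reservation_table_sketch *r1,
                                              const reservation_table_sketch *r2)
    {
        reservation_table_sketch out = *r1;
        int shift, s, t, collide;

        for (shift = 0; shift <= r1->t_max; shift++) {
            collide = 0;
            for (s = 0; s < N_STAGES && !collide; s++)
                for (t = 0; t < r2->t_max && !collide; t++)
                    if (r2->use[s][t] && r1->use[s][shift + t])
                        collide = 1;
            if (!collide)
                break;                      /* earliest non-conflicting start */
        }
        for (s = 0; s < N_STAGES; s++)
            for (t = 0; t < r2->t_max; t++)
                if (r2->use[s][t])
                    out.use[s][shift + t] = 1;
        out.t_max = (shift + r2->t_max > r1->t_max) ? shift + r2->t_max : r1->t_max;
        return out;
    }

The overlap found by the shift search is exactly the information that keeping more head and tail columns preserves.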
The above timing formula for S: S_1; S_2 effectively enumerates all the possible candidates for
the worst case execution path of S_1; S_2. However, during each instantiation of this timing formula,
a check is made to see whether the resulting WCTA can be pruned. An element in a WCTA can be
removed from the WCTA if we can guarantee that that element's WCET in the worst case scenario is
shorter than the best case scenario WCET of some other element in the same WCTA. This pruning
condition can be more formally specified as follows:
A reservation table w in a WCTA W can be pruned without affecting the prediction for
the worst case timing behavior of W if there exists another reservation table w' in W such that

w.t_max < w'.t_max − δ_head − δ_tail.

In this condition, w.t_max is w's execution time when we assume the worst case scenario for w (i.e.,
when no part of w's head and tail is overlapped with the surrounding program constructs). On
the other hand, w'.t_max − δ_head − δ_tail is the execution time of w' when we assume the best case
scenario for w' (i.e., when its head is completely overlapped with the tail of the preceding program
construct and its tail is completely overlapped with the head of the succeeding program construct).
The timing formula of an if statement S: if (exp) then S_1 else S_2 is given by

W(S) = ( W(exp) ⊕ W(S_1) ) ∪ ( W(exp) ⊕ W(S_2) )

where W(S), W(exp), W(S_1), and W(S_2) are the WCTAs of S, exp, S_1, and S_2, respectively, and
∪ is the set union operation. As in the previous timing formula, pruning is performed during each
instantiation of this timing formula.
Function calls are processed like sequential statements. In our approach, functions are processed
in a reverse topological order in the call graph 4 since the WCTA of a function should be calculated
before the functions that call it are processed.
4 A call graph contains the information on how functions call each other [23]. For example, if f calls g, then an arc
connects f 's vertex to that of g in their call graph.
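A small sketch of this bottom-up ordering, assuming the call graph is acyclic and stored as per-function callee lists (the array names are ours); a post-order depth-first traversal emits every callee before its callers:

    #define MAX_FUNCS 128

    static int n_callees[MAX_FUNCS];
    static int callee[MAX_FUNCS][MAX_FUNCS];   /* callee[f][k]: k-th function called by f */
    static int visited[MAX_FUNCS];
    static int order[MAX_FUNCS];               /* functions in analysis order             */
    static int n_ordered = 0;

    /* Post-order DFS: all callees of f are placed before f itself, so each
       function's WCTA is available by the time its callers are analyzed.   */
    static void schedule(int f)
    {
        int k;
        if (visited[f])
            return;
        visited[f] = 1;
        for (k = 0; k < n_callees[f]; k++)
            schedule(callee[f][k]);
        order[n_ordered++] = f;
    }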
Finally, the timing formula of a loop statement S: while (exp) S_1 is given by

W(S) = ( ⊕_{i=1..N} ( W(exp) ⊕ W(S_1) ) ) ⊕ W(exp)

where N is a loop bound that is provided by some external means (e.g., from user input). This timing
formula effectively enumerates all the possible candidates for the worst case execution scenario of
the loop statement. This approach is exact but is computationally intractable for a large N. In the
following, we provide approximate methods for loop timing analysis.
Approximate Loop Timing Analysis The problem of finding the worst case execution scenario
for a loop statement with loop bound N can be formulated as a problem to find the longest
weighted path (not necessarily simple) containing exactly N arcs in a weighted directed graph. Thus,
the approximate loop timing analysis method is explained using a graph theoretic formulation.
Let G = (P, A) be a weighted directed graph, where P = {p_1, p_2, ..., p_n} is the set of the
execution paths in the loop body that might be the worst case execution path (i.e., those in
W(exp) ⊕ W(S_1) that cannot be pruned by each other) and A is the set of arcs between them.
Associated with each arc (p_i, p_j) ∈ A is a weight w_ij, which is the execution time of path p_j when
its execution is immediately preceded by path p_i. Define D_{ℓ,i,j} as the weight of the longest
path (not necessarily simple) from p_i to p_j in G containing exactly ℓ arcs. With this definition, the
t_max of the loop's worst case execution scenario that starts with path p_i and ends with path p_j is
given by p_i.t_max + D_{N−1,i,j}, where p_i.t_max is the t_max of path p_i. The WCTA element of this worst case execution
scenario inherits p_i's head since it starts with p_i. Likewise, it inherits p_j's tail. From these, the
WCTA element of the loop's worst case execution scenario that starts with path p_i and ends with path p_j,
which is denoted by wcta(wp^N_{ij}), is given by (p_i.t_max + D_{N−1,i,j}, p_i.head, p_j.tail). Since the actual
worst case execution scenario of the loop depends on the program constructs surrounding the loop
statement, we do not know with which paths the actual worst case execution scenario starts and
ends when we analyze the loop statement. Therefore, one has to consider all the possibilities. The
corresponding WCTA of the loop statement is given by ( ⋃_{i,j} wcta(wp^N_{ij}) ) ⊕ W(exp). The
only remaining problem is to determine D_{N−1,i,j}. We determine the value by solving the following
equations:

D_{1,i,j} = w_ij
D_{ℓ,i,j} = max_{p_k ∈ P} ( D_{ℓ−1,i,k} + w_kj )   for ℓ ≥ 2.

Computation of D_{ℓ,i,j} for all i, j, and ℓ (1 ≤ ℓ ≤ N−1) using dynamic programming takes O(N × |P|^3)
time. For a large N, this time complexity is still unacceptable. In the following, we describe a faster
technique that gives a very tight upper bound for D ';i;j . This technique is based on the calculation
of the maximum cycle mean of G.
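A sketch of that dynamic program over the path graph (array names and sizes are illustrative; w[i][j] holds the weight w_ij); the three nested loops, repeated about N times, make the O(N × |P|^3) cost visible:

    #define MAX_P   16            /* candidate paths that survive pruning */
    #define NEG_INF (-1.0e18)

    static double D[MAX_P][MAX_P];        /* D[i][j] = current D_{l,i,j} */
    static double next_D[MAX_P][MAX_P];

    /* Leaves D_{N-1,i,j} in D[][] for the weight matrix w.              */
    static void longest_fixed_length_paths(double w[MAX_P][MAX_P], int n_paths, long N)
    {
        int i, j, k;
        long l;
        for (i = 0; i < n_paths; i++)
            for (j = 0; j < n_paths; j++)
                D[i][j] = w[i][j];                          /* l = 1            */
        for (l = 2; l <= N - 1; l++) {                      /* add one more arc */
            for (i = 0; i < n_paths; i++)
                for (j = 0; j < n_paths; j++) {
                    next_D[i][j] = NEG_INF;
                    for (k = 0; k < n_paths; k++)
                        if (D[i][k] + w[k][j] > next_D[i][j])
                            next_D[i][j] = D[i][k] + w[k][j];
                }
            for (i = 0; i < n_paths; i++)
                for (j = 0; j < n_paths; j++)
                    D[i][j] = next_D[i][j];
        }
    }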
The maximum cycle mean of a weighted directed graph G is defined as m = max_c m(c), where c
ranges over all directed cycles in G and m(c) is the mean weight of c. The maximum cycle mean
can be calculated in O(|P| × |A|) time, which is independent of N, using an algorithm due to
Karp [24]. Let m be the maximum cycle mean of G; then D_{ℓ,i,j} can safely be approximated by
D'_{ℓ,i,j} = ℓ × m + (m − w_ji). We prove this in the following proposition.
Proposition 1 If D_{ℓ,i,j} is the maximum weight of a path (not necessarily simple) from p_i to p_j
containing exactly ℓ arcs in a complete weighted directed graph G = (P, A), and m is the maximum
cycle mean of G, then D_{ℓ,i,j} ≤ D'_{ℓ,i,j} = ℓ × m + (m − w_ji).
Proof. Assume for the sake of contradiction that D_{ℓ,i,j} is greater than ℓ × m + (m − w_ji). Then we
can construct a cycle containing ℓ + 1 arcs by adding the arc from p_j to p_i to the path from which
D_{ℓ,i,j} is calculated. The arc should exist since G is a complete graph. The resulting cycle has a
mean weight greater than m since (D_{ℓ,i,j} + w_ji) / (ℓ + 1) > (ℓ × m + (m − w_ji) + w_ji) / (ℓ + 1) = m. This implies the existence of
a cycle in G whose mean weight is greater than m. This contradicts our hypothesis that m is the
maximum cycle mean of G, and thus D_{ℓ,i,j} ≤ ℓ × m + (m − w_ji).
Moreover, it has been shown that D'_{ℓ,i,j} − D_{ℓ,i,j}, which indicates the looseness of the approximation,
is bounded above by 3 × (m − w_min), where w_min is the minimum weight of an arc in A
[25]. We can expect this bound to be very tight since m ≈ w_min. (Remember that P consists of the
paths in W(exp) ⊕ W(S_1) that cannot be pruned by each other.)
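For completeness, a sketch of Karp's characterization adapted to the maximum cycle mean, reusing MAX_P and NEG_INF from the sketch above; it exploits the fact that the path graph here is complete, so F_k(v) is finite for every k ≥ 1 (the paper itself only cites the algorithm [24]):

    /* F[k][v]: maximum weight of an edge progression with exactly k arcs from
       vertex 0 to v.  Karp's theorem (stated for minimum cycle means, applied
       here with negated weights) gives
           m = max_v min_{0<=k<=n-1} (F[n][v] - F[k][v]) / (n - k).            */
    static double max_cycle_mean(double w[MAX_P][MAX_P], int n)
    {
        static double F[MAX_P + 1][MAX_P];
        int k, u, v;
        double m = NEG_INF, best;

        for (v = 0; v < n; v++)
            F[0][v] = (v == 0) ? 0.0 : NEG_INF;
        for (k = 1; k <= n; k++)
            for (v = 0; v < n; v++) {
                F[k][v] = NEG_INF;
                for (u = 0; u < n; u++)
                    if (F[k - 1][u] > NEG_INF / 2 && F[k - 1][u] + w[u][v] > F[k][v])
                        F[k][v] = F[k - 1][u] + w[u][v];
            }
        for (v = 0; v < n; v++) {
            best = 1.0e18;
            for (k = 0; k < n; k++)
                if (F[k][v] > NEG_INF / 2 && (F[n][v] - F[k][v]) / (n - k) < best)
                    best = (F[n][v] - F[k][v]) / (n - k);
            if (best > m)
                m = best;
        }
        return m;
    }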
Interference Up to now, we have assumed that tasks execute without preemption. However,
Fig. 6. Sample instruction block references from a program construct
in real systems, tasks may be preempted for various reasons: preemptive scheduling, external inter-
rupts, resource contention, and so on. For a task, these preemptions are interference that introduces breaks
in the task's execution flow. The problem regarding interference is that of adjusting the prediction
made under the assumption of no interference such that the prediction is applicable in an environment
with interference. Fortunately, the additional per-preemption delay introduced by pipelined
execution is bounded by the maximum number of cycles for which an instruction remains in the
pipeline (in MIPS R3000 it is 36 cycles in the case of the div instruction). Once this information is
available, adjusting the predictions to reflect interference can be done using the techniques explained
in [26].
IV INSTRUCTION CACHING EFFECTS
For a processor with an instruction cache, the execution time of a program construct will differ
depending on which execution path was taken prior to the program construct. This is a result of
the history sensitive nature of the instruction cache. As an example, consider a program construct
that accesses instruction blocks 5 (b 2 , b 3 , b 2 , b 4 ) in the sequence given (cf. Fig. 6). Assume that
the instruction cache has only two blocks and is direct-mapped. In a direct-mapped cache, each
instruction block can be placed exactly in one cache block whose index is given by instruction block
number modulo number of blocks in the cache.
In this example, the second reference to b 2 will always hit in the cache because the first reference
to b 2 will bring b 2 into the cache and this cache block will not be replaced in the mean time. On
5 We regard a sequence of consecutive references to an instruction block as a single reference to the instruction
block without any loss of accuracy in the analysis.
struct pipeline_cache_timing_information {
    time t_max;
    reservation_table head[δ_head];
    reservation_table tail[δ_tail];
    block_address first_reference[n_block];
    block_address last_reference[n_block];
};
Fig. 7. Structure of an element in a WCTA
the other hand, the reference to b 4 will always miss in the cache even when b 4 was previously in the
cache prior to this program construct because the first reference to b 2 will replace b 4 's copy in the
cache. (Note that b 2 and b 4 are mapped to the same cache block in the assumed cache configuration.)
Unlike the above two references whose hits or misses can be determined by local analysis, the hit or
miss of the first reference to b 2 cannot be determined locally and is dependent on the cache contents
immediately before executing this program construct. Similarly, the hit or miss of the reference
to b 3 will depend on the previous cache contents. The hits or misses of these two references will
affect the (worst case) execution time of this program construct. Moreover, the cache contents after
executing this program construct will, in turn, affect the execution time of the succeeding program
construct in a similar way. These timing variations, again, cannot be accurately represented by a
simple time-bound of the original timing schema.
This situation is similar to the case of pipelined execution discussed in the previous section
and, therefore, we adopt the same strategy; we simply extend the timing information of elements
in the WCTA leaving the timing formulas intact. Each element in the WCTA now has two sets
of instruction block addresses in addition to t max , head, and tail used for the timing analysis of
pipelined execution. Fig. 7 gives the data structure for an element in the WCTA in this new setting
where n block denotes the number of blocks in the cache.
In the given data structure, the first set of instruction block addresses (i.e., first reference)
maintains the instruction block addresses of the references whose hits or misses depend on the
cache contents prior to the program construct. In other words, this set maintains for each cache
block the instruction block address of the first reference to the cache block. The second set (i.e.,
last reference) maintains the addresses of the instruction blocks that will remain in the cache
after the execution of the program construct. In other words, this set maintains for each cache block
Fig. 8. Contents of the element (head, tail, first_reference, and last_reference) corresponding to the example in Fig. 6
the instruction block address of the last reference to the cache block. These are the cache contents
that will determine the hits or misses of the instruction block references in the first reference
of the succeeding program construct. In calculating t max , we accurately account for the hits and
misses that can be locally determined such as the second reference to b 2 and the reference to b 4 in the
previous example. However, the instruction block references whose hits or misses are not known (i.e.,
those in first reference) are conservatively assumed to miss in the cache in the initial estimate of
t max . This initial estimate is later refined as the information on the hits or misses of those references
becomes available at a later stage of the analysis. Fig. 8 shows the timing information maintained
for the program construct given in the previous example.
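A sketch of how the locally determinable hits and the two address sets could be extracted from a construct's instruction block reference sequence, assuming a direct-mapped cache (the function and macro names are ours; N_BLOCK_EX matches the two-block example):

    #define N_BLOCK_EX 2          /* cache blocks in the example of Fig. 6 */
    #define NO_REF     (-1)

    /* For the reference sequence refs[0..n-1] (instruction block numbers), fill
       first_reference and last_reference per cache block and count the hits
       that can be determined by looking at this construct alone.               */
    static int local_cache_analysis(const int *refs, int n,
                                    int first_reference[N_BLOCK_EX],
                                    int last_reference[N_BLOCK_EX])
    {
        int i, idx, hits = 0;
        for (idx = 0; idx < N_BLOCK_EX; idx++)
            first_reference[idx] = last_reference[idx] = NO_REF;
        for (i = 0; i < n; i++) {
            idx = refs[i] % N_BLOCK_EX;               /* direct-mapped index      */
            if (last_reference[idx] == refs[i])
                hits++;                               /* locally known hit        */
            if (first_reference[idx] == NO_REF)
                first_reference[idx] = refs[i];       /* hit/miss unknown locally */
            last_reference[idx] = refs[i];
        }
        return hits;
    }

For the sequence (b_2, b_3, b_2, b_4) of Fig. 6 this returns one locally known hit (the second access to b_2), first_reference = {b_2, b_3}, and last_reference = {b_4, b_3}, matching Fig. 8.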
With this extension, the timing formula of S: S_1; S_2 is again given by

W(S) = W(S_1) ⊕ W(S_2).

This timing formula is structurally identical to the one given in the previous section for the sequential
statement. The differences are in the structure of the elements in the WCTAs and in the semantics
of the ⊕ operation. The revised semantics of the ⊕ operation is procedurally defined in Fig. 9.
The function concatenate given in the figure concatenates two input elements w_1 and w_2 and
puts the result into w_3, thus implementing the ⊕ operation. In lines 9-12 of function concatenate,
w_3 inherits w_1's first_reference if the corresponding cache block is accessed in w_1. If the cache
block is not accessed in w_1, the first reference to the cache block in w_1 ⊕ w_2 is from w_2. Therefore,
 1  struct pipeline_cache_timing_information
 2  concatenate(struct pipeline_cache_timing_information w1,
 3              struct pipeline_cache_timing_information w2)
 4  {
 5      struct pipeline_cache_timing_information w3;
 6      int i;
 7      int n_hit = 0;
 8      for (i = 0; i < n_block; i++) {
 9          if (w1.first_reference[i] == NULL)
10              w3.first_reference[i] = w2.first_reference[i];
11          else
12              w3.first_reference[i] = w1.first_reference[i];
13          if (w2.last_reference[i] == NULL)
14              w3.last_reference[i] = w1.last_reference[i];
15          else
16              w3.last_reference[i] = w2.last_reference[i];
17          if (w1.last_reference[i] == w2.first_reference[i])
18              n_hit++;
19      }
20      w3.head = w1.head;
21      w3.t_max = (w1 ⊕_pipeline w2).t_max - n_hit * miss_penalty;
22      w3.tail = (w1 ⊕_pipeline w2).tail;
23      return w3;
24  }
Fig. 9. Semantics of the ⊕ operation
w_3 would inherit w_2's first_reference. Likewise, in lines 13-16, w_3 inherits w_2's last_reference
if the corresponding cache block is accessed in w_2, or w_1's last_reference otherwise. By comparing
w_2's first_reference with w_1's last_reference, lines 17-18 determine how many of the memory
references in w_2's first_reference will hit in the cache. These cache hits are used to refine w_3's
t_max. (Remember that all the memory references in w_2's first_reference were previously assumed
to miss in the cache in the initial estimate of w_2's t_max.) In lines 20-21, w_3 inherits w_1's head, and
w_3's t_max and tail are computed taking into account the pipelined execution across w_1
and w_2 and the cache hits determined in lines 17-18. In this calculation, the ⊕_pipeline operation is
the ⊕ operation defined in the previous section for the timing analysis of pipelined execution and
miss_penalty is the time needed to service a cache miss.
As before, an element in a WCTA can safely be eliminated (i.e., pruned) from the WCTA if we
can guarantee that the element's WCET is always shorter than that of some other element in the
same WCTA regardless of what the surrounding program constructs are. This condition for pruning
is procedurally specified in Fig. 10. The function prune given in the figure checks whether either one
 1  struct pipeline_cache_timing_information
 2  prune(struct pipeline_cache_timing_information w1,
 3        struct pipeline_cache_timing_information w2)
 4  {
 5      int i;
 6      int n_diff = 0;
 7      for (i = 0; i < n_block; i++) {
 8          if (w1.first_reference[i] != w2.first_reference[i])
 9              n_diff++;
10          if (w1.last_reference[i] != w2.last_reference[i])
11              n_diff++;
12      }
13      if (/* w2's worst case WCET (allowing for n_diff) is shorter than w1's best case WCET */)
14          return w2;
15      else
16      if (/* w1's worst case WCET (allowing for n_diff) is shorter than w2's best case WCET */)
17          return w1;
18      else
19          return NULL;
20  }
Fig. 10. Semantics of pruning operation
of the two execution paths corresponding to the two input elements can be pruned and
returns the pruned element if the pruning is successful and null if neither of them can be pruned.
In the function prune, lines 6-12 determine how many entries in w_1's first_reference and
last_reference are different from the corresponding entries in w_2's first_reference and
last_reference. The difference bounds the cache memory related execution time variation between
the two elements. Line 13 checks whether w_2 can be pruned by w_1. Pruning of w_2 by w_1 can be
made if w 2 's WCET assuming the worst case scenario for w 2 is shorter than w 1 's WCET assuming
best case scenario. Likewise, line 16 checks whether w 1 can be pruned by w 2 .
Again as before, the timing formula of S: if (exp) then S_1 else S_2 is given by W(S) = ( W(exp) ⊕ W(S_1) ) ∪ ( W(exp) ⊕ W(S_2) ).
As in the previous section, the problem of calculating W(S) for a loop statement S: while (exp)
S_1 can be formulated as a graph theoretic problem. Here, wcta(wp^N_{ij}) is given by

(p_i.t_max + D_{N−1,i,j}, p_i.head, p_j.tail, p_i.first_reference, p_j.last_reference).

After calculating wcta(wp^N_{ij}) for all i and j, W(S) can be computed as follows:

W(S) = ( ⋃_{i,j} wcta(wp^N_{ij}) ) ⊕ W(exp).
The loop timing analysis discussed in the previous section assumes that each loop iteration benefits
only from the immediately preceding loop iteration. This is because in the calculation of w ij , we only
consider the execution time reduction of p j due to the execution overlap with p i . This assumption
holds in the case of pipelined execution since the execution time of an iteration's head is affected
only by the tail of the immediately preceding iteration. In the case of cache memory, however,
the assumption does not hold in general. For example, an instruction memory reference may hit to
a cache block that was loaded into the cache in an iteration other than the immediately preceding
one. Nevertheless, since the assumption is conservative, the resulting worst case timing analysis is
safe in the sense that the result does not underestimate the WCET of the loop statement. The
degradation of accuracy resulting from this conservative assumption can be reduced by analyzing a
sequence of k (k > 1) iterations at the same time rather than just one iteration [25]. In this case,
each vertex represents an execution of a sequence of k iterations and w ij is the execution time of
sequence j when its execution is immediately preceded by an execution of sequence i . This analysis
corresponds to the analysis of the loop unrolled k times and trades increased analysis complexity
for a more accurate WCET estimation.
a) Set associative caches: Up to now we have considered only the simplest cache organization
called the direct-mapped cache in which each instruction block can be placed exactly in one cache
block. In a more general cache organization called the n-way set associative cache, each instruction
block can be placed in any one of the n blocks in the mapped set 6 . Set associative caches need a
policy that decides which block to replace among the blocks in the set to make room for a block
fetched on a cache miss. The LRU (Least Recently Used) policy is typically used for that purpose.
Once this replacement policy is given (assuming that it is not random), it is straightforward to
implement the ⊕ and prune operations needed in our analysis method.
6 In a set associative cache, the index of the mapped set is given by instruction block number modulo number of
sets in the cache.
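A sketch of the per-set state update such an extension needs for an n-way set associative cache with LRU replacement (one convenient representation, not necessarily the paper's, reusing NO_REF from the earlier sketch): each set is kept as an array of block addresses ordered from most to least recently used.

    #define N_WAYS 4              /* illustrative associativity */

    /* Returns 1 on a hit and 0 on a miss, and reorders the set so that the
       referenced block becomes the most recently used entry.               */
    static int lru_access(int set_state[N_WAYS], int block)
    {
        int way, k, hit = 0;
        for (way = 0; way < N_WAYS; way++)
            if (set_state[way] == block) {
                hit = 1;
                break;
            }
        if (!hit)
            way = N_WAYS - 1;             /* victim: the least recently used way */
        for (k = way; k > 0; k--)
            set_state[k] = set_state[k - 1];
        set_state[0] = block;             /* referenced block becomes the MRU    */
        return hit;
    }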
V DATA CACHING EFFECTS
The timing analysis of data caches is analogous to that of instruction caches. However, the former
differs from the latter in several important ways. First, unlike instruction references, the actual
addresses of some data references are not known at compile-time. This complicates the timing
analysis of data caches since the calculation of first reference and last reference, which is
the most important aspect of our cache timing analysis, assumes that the actual address of every
memory reference is known at compile-time. This complication, however, can be avoided completely
if a simple hardware support in the form of one bit in each load/store instruction is available. This
bit, called allocate bit, decides whether the memory block fetched on a miss will be loaded into
the cache. For a data reference whose address cannot be determined at compile-time, the allocate
bit is set to zero preventing the memory block fetched on a miss from being loaded into the cache.
For other references, this bit is set to one allowing the fetched block to be loaded into the cache.
With this hardware support, the worst case timing analysis of data caches can be performed very
much like that of instruction caches, i.e. treating the references whose addresses are not known at
compile-time as misses and completely ignoring them in the calculation of first reference and
last reference. Even when such hardware support is not available, the worst case timing analysis
of data caches is still possible by taking two cache miss penalties for each data reference whose
address cannot be determined at compile-time, and then ignoring the reference in the analysis [27].
One cache miss penalty is due to the fact that the reference may miss in the cache. The other
is due to the fact that the reference may replace a cache block that contributes a cache hit in our
analysis.
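A sketch of that conservative accounting when no allocate bit is available (field names follow Fig. 7; miss_penalty is the cache miss service time introduced in Section IV):

    extern int miss_penalty;      /* cache miss service time */

    /* Charge a data reference whose address is unknown at compile-time: it may
       itself miss, and it may evict a block whose hit was already credited in
       the analysis; it contributes nothing to first_reference/last_reference. */
    static void account_unknown_reference(struct pipeline_cache_timing_information *w)
    {
        w->t_max += 2 * miss_penalty;
    }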
The second difference stems from accesses to local variables. In general, data area for local
variables of a function, called the activation record of the function, is pushed and popped on a
runtime stack as the associated function is called and returned. In most implementations, a specially
designated register, called sp (Stack Pointer), marks the top of the stack and each local variable
is addressed by an offset relative to sp. The offsets of local variables are determined at compile-
time. However, the sp value of a function differs depending on where the function is called from.
Nevertheless, the number of distinct sp values a function may have is bounded. Therefore, the WCTA
of a function can be computed for each sp value the function may have. Such sp values can be
calculated from the activation record sizes of functions and the call graph.
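A sketch of how these sp value sets could be collected, reusing the call graph arrays from the earlier sketch and assuming a downward growing stack with per-function activation record sizes in frame_size[] (the names are ours):

    #define MAX_SP_VALUES 32

    static int frame_size[MAX_FUNCS];                  /* activation record sizes   */
    static int sp_values[MAX_FUNCS][MAX_SP_VALUES];    /* distinct sp values per fn */
    static int n_sp_values[MAX_FUNCS];

    static void add_sp_value(int f, int sp)
    {
        int k;
        for (k = 0; k < n_sp_values[f]; k++)
            if (sp_values[f][k] == sp)
                return;                                /* already recorded          */
        sp_values[f][n_sp_values[f]++] = sp;
    }

    /* Each callee runs with the caller's sp lowered by the caller's activation
       record size (the stack grows toward lower addresses).                    */
    static void propagate_sp(int f)
    {
        int k, s;
        for (k = 0; k < n_callees[f]; k++)
            for (s = 0; s < n_sp_values[f]; s++)
                add_sp_value(callee[f][k], sp_values[f][s] - frame_size[f]);
    }

Seeding main with its initial sp and calling propagate_sp for the functions in topological order (callers before callees) then fills in every function's set of possible sp values.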
The final difference is due to write accesses. Unlike instruction references, which are read-only,
data references may both read from and write to memory. In data caches, either write-through or
write-back policy is used to handle write accesses [18]. In the write-through policy, the effect of each
write is reflected on both the block in the cache and the block in main memory. On the other hand,
in the write-back policy, the effect is reflected only on the block in the cache and a dirty bit is set
to indicate that the block has been modified. When a block whose dirty bit is set is replaced from
the cache, the block's contents are written back to main memory.
The timing analysis of data caches with the write-through policy is relatively simple. One simply
has to add a delay to each write access to account for the accompanying write access to main memory.
However, the timing analysis of data caches with the write-back policy is slightly more complicated.
In a write-back cache, a sequence of write accesses to a cached memory block without a replacement
in-between, which we call a write run, requires only one write-back to main memory. We attribute
this write-back overhead (i.e., delay) to the last write in the write run, which we call the tail of
the write run. With this setting, one has to determine whether a given write access can be a tail
to accurately estimate the delay due to write-backs. In some cases, local analysis can determine
whether a write access is a tail or not as in the case of hit/miss analysis for a memory reference.
However, local analysis is not sufficient to determine whether a write access is a tail in every case.
Hence, when this is not possible, we conservatively assume that the write access is a tail and add a
write-back delay to t max . However, if later analysis over the program syntax tree reveals that the
write access is not a tail, we subtract the incorrectly attributed write-back delay from t max . This
global analysis can be performed by providing a few bits to each block in first reference and
last reference and augmenting the \Phi and pruning operations [27].
VI EXPERIMENTAL RESULTS
We tested whether our extended timing schema approach could produce useful WCET bounds by
building a timing tool based on the approach and comparing the WCET bounds predicted by the
timing tool to the measured times. Our timing tool consists of a compiler and a timing analyzer (cf.
Fig. 11). The compiler is a modified version of an ANSI C compiler called lcc [28]. The modified
compiler accepts a C source program and generates the assembly code along with the program syntax
information and the call graph. The timing analyzer uses the assembly code and the program syntax
Fig. 11. Overview of the timing tool: the modified compiler produces the assembly code, the program syntax information, and the call graph, which the timing analyzer combines with user-provided information to compute the WCET (and the corresponding worst case execution path, WCEP)
information along with user-provided information (e.g., loop bound) to compute the WCET of the
program.
We chose an IDT7RS383 board as the timing tool's target machine. The target machine's CPU is
a 20 MHz R3000 processor which is a typical RISC processor. The R3000 processor has a five-stage
integer pipeline and an interface for off-chip instruction and data caches. It also has an interface for
an off-chip Floating-Point Unit (FPU).
The IDT7RS383 board contains instruction and data caches of 16 Kbytes each. Both caches are
direct-mapped and have block sizes of 4 bytes. The data cache uses the write-through policy and has
a one-entry deep write buffer. The cache miss service times of both the instruction and data caches
are 4 cycles. The FPU used in the board is a MIPS R3010. Although the board has a timer chip that
provides user-programmable timers, their resolutions are too low for our measurement purposes. To
facilitate the measurement of program execution times in machine cycles, we built a daughter board
that consists of simple decoding circuits and counter chips, and provides one user-programmable
timer. The timer starts and stops by writing to specific memory locations and has a resolution of
one machine cycle (50 ns).
Three simple benchmark programs were chosen: Clock, Sort and MM. The Clock benchmark is a
program used to implement a periodic timer. The program periodically checks 20 linked-listed timers
and, if any of them expires, calls the corresponding handler function. The Sort benchmark sorts an
array of 20 integer numbers and the MM program multiplies two 5 × 5 floating-point matrices.
Table 1 compares the WCET bounds predicted by the timing tool and the measured execution
times for the three benchmark programs. In all three cases, the tool gives fairly tight WCET bounds
(within a maximum of about 30% overestimation). A closer inspection of the results revealed that

             Clock    Sort     MM
Predicted
Measured     2768     11471    6346
(unit: machine cycles)
Table 1. Predicted and measured execution times of the benchmark programs
more than 90% of the overestimation is due to data references whose addresses are not known at
compile-time. (Remember that we have to account for two cache miss penalties for each such data
reference.)
Program execution time is heavily dependent on the program execution path, and the logic of
most programs severely limits the set of possible execution paths. However, we intentionally chose
benchmark programs that do not suffer from overestimation due to infeasible paths. The rationale
behind this selection is that predicting tighter WCET bounds by eliminating infeasible paths using
dynamic path analysis is an issue orthogonal to our approach and that this analysis can be introduced
into the existing timing tool without modifying the extended timing schema framework. In fact, a
method for analyzing dynamic program behavior to eliminate infeasible paths of a program within
the original timing schema framework is given in [29] and we feel that our timing tool will equally
benefit from the proposed method.
We view our experimental work reported here as an initial step toward validating our extended
timing schema approach. Clearly, much experimental work, especially with programs used in real
systems, needs to follow to demonstrate that our approach is practical for realistic systems.
VII CONCLUSION
In this paper, we described a technique that aims at accurately estimating the WCETs of tasks
for RISC processors. In the proposed technique, two kinds of timing information are associated
with each program construct. The first type of information is about the factors that may affect the
timing of the succeeding program construct. The second type of information is about the factors
that are needed to refine the execution time of the program construct when the first type of timing
information of the preceding program construct becomes available at a later stage of WCET analysis.
We extended the existing timing schema using these two kinds of timing information so that we can
accurately account for the timing variations resulting from the history sensitive nature of pipelined
execution and cache memory. We also described an optimization that minimizes the overhead of
the proposed technique by pruning the timing information associated with an execution path that
cannot be part of the worst case execution path.
We also built a timing analyzer based on the proposed technique and compared the WCET
bounds of sample programs predicted by the timing analyzer to their measured execution times.
The timing analyzer gave fairly tight predictions (within a maximum of about 30% overestimation)
for the benchmark programs we used and the sources of the overestimation were identified.
The proposed technique has the following advantages. First, the proposed technique makes
possible an accurate analysis of combined timing effects of pipelined execution and cache memory,
which, previously, was not possible. Second, the timing analysis using the proposed technique is
more accurate than that of any other technique we are aware of. Third, the proposed technique
is applicable to most RISC processors with in-order issue and single-level cache memory. Finally,
the proposed technique is extensible in that its general rule may be used to model other machine
features that have history sensitive timing behavior. For example, we used the underlying general
rule to model the timing variation due to write buffers [27].
One direction for future research is to investigate whether or not the proposed technique applies
to more advanced processors with out-of-order issue [30] and/or multi-level cache hierarchies
[18]. Another research direction is in the development of theory and methods for the design of a
retargetable timing analyzer. Our initial investigation on this issue was made in [31]. The results
indicated that the machine-dependent components of our timing analyzer such as the routines that
implement the concatenation and pruning operations of the extended timing schema can be automatically
generated from an architecture description of the target processor. The details of the
approach are not repeated here and interested readers are referred to [31].
--R
"Reasoning About Time in Higher-Level Language Software,"
"SMART (Strategic Memory Allocation for Real-Time) Cache Design,"
"Software-Based Cache Partitioning for Real-time Applications,"
"Timing Analysis of Superscalar Processor Programs Using ACSR,"
"High-Level Timing Specification of Instruction-Level Parallel Processors,"
"A Retargetable Technique for Predicting Execution Time,"
"Deterministic Upperbounds of the Worst-Case Execution Times of Cached Programs,"
"Evaluating Tight Execution Time Bounds of Programs by Annotations,"
"Predictable Real-Time Caching in the Spring System,"
"Bounding Worst-Case Instruction Cache Performance,"
"Portable Execution Time Analysis for RISC Processors,"
"Experiments with a Program Timing Tool Based on Source-Level Timing Schema,"
"Calculating the MaximumExecution Time of Real-Time Programs,"
"Static Analysis of Cache Performance for Real-Time Programming,"
"Pipelined Processors and Worst-Case Execution Times,"
Computer Architecture: A Quantitative Approach.
Aspects of Cache Memory and Instruction Buffer Performance.
"Register Allocation by Priority-based Coloring,"
The Architecture of Pipelined Computers.
Englewood Cliffs
Crafting a Compiler with C.
"A Characterization of the Minimum Cycle Mean in a Digraph,"
"Instruction Cache and Pipelining Analysis Technique for Real-Time Systems,"
Predicting Deterministic Execution Times of Real-Time Programs
"Data Cache Analysis Techniques for Real-Time Systems,"
"A Code Generation Interface for ANSI C,"
"Predicting Program Execution Times by Analyzing Static and Dynamic Program Paths,"
"Look-ahead Processors,"
"Retargetable Timing Analyzer for RISC Processors,"
--TR
--CTR
Hassan Aljifri , Alexander Pons , Moiez Tapia, The estimation of the WCET in super-scalar real-time system, Real-time system security, Nova Science Publishers, Inc., Commack, NY,
Jan Staschulat , Rolf Ernst, Scalable precision cache analysis for preemptive scheduling, ACM SIGPLAN Notices, v.40 n.7, July 2005
Minsoo Ryu , Jungkeun Park , Kimoon Kim , Yangmin Seo , Seongsoo Hong, Performance re-engineering of embedded real-time systems, ACM SIGPLAN Notices, v.34 n.7, p.80-86, July 1999
Dongkun Shin , Jihong Kim , Seongsoo Lee, Low-energy intra-task voltage scheduling using static timing analysis, Proceedings of the 38th conference on Design automation, p.438-443, June 2001, Las Vegas, Nevada, United States
Xianfeng Li , Abhik Roychoudhury , Tulika Mitra, Modeling out-of-order processors for WCET analysis, Real-Time Systems, v.34 n.3, p.195-227, November 2006
Jurgen Schnerr , Oliver Bringmann , Wolfgang Rosenstiel, Cycle Accurate Binary Translation for Simulation Acceleration in Rapid Prototyping of SoCs, Proceedings of the conference on Design, Automation and Test in Europe, p.792-797, March 07-11, 2005
Dongkun Shin , Jihong Kim , Seongsoo Lee, Intra-Task Voltage Scheduling for Low-Energy, Hard Real-Time Applications, IEEE Design & Test, v.18 n.2, p.20-30, March 2001
Jrn Schneider , Christian Ferdinand, Pipeline behavior prediction for superscalar processors by abstract interpretation, ACM SIGPLAN Notices, v.34 n.7, p.35-44, July 1999
Sheayun Lee , Sang Lyul Min , Chong Sang Kim , Chang-Gun Lee , Minsuk Lee, Cache-Conscious Limited Preemptive Scheduling, Real-Time Systems, v.17 n.2-3, p.257-282, Nov. 1999
Henrik Theiling, Generating Decision Trees for Decoding Binaries, ACM SIGPLAN Notices, v.36 n.8, p.112-120, Aug. 2001
Joosun Hahn , Rhan Ha , Sang Lyul Min , Jane W.-S. Liu, Analysis of Worst Case DMA Response Time in a Fixed-Priority Bus Arbitration Protocol, Real-Time Systems, v.23 n.3, p.209-238, November 2002
Tobias Schuele , Klaus Schneider, Abstraction of assembler programs for symbolic worst case execution time analysis, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Thomas Lundqvist , Per Stenstrm, An Integrated Path and Timing Analysis Method based onCycle-Level Symbolic Execution, Real-Time Systems, v.17 n.2-3, p.183-207, Nov. 1999
Daniel Kstner , Stephan Thesing, Cache Aware Pre-Runtime Scheduling, Real-Time Systems, v.17 n.2-3, p.235-256, Nov. 1999
Colin Fidge , Peter Kearney , Mark Utting, A Formal Method for Building Concurrent Real-Time Software, IEEE Software, v.14 n.2, p.99-106, March 1997
Henrik Theiling , Christian Ferdinand , Reinhard Wilhelm, Fast and Precise WCET Prediction by Separated Cache andPath Analyses, Real-Time Systems, v.18 n.2-3, p.157-179, May 2000
Joan Krone , William F. Ogden , Murali Sitaraman, Performance analysis based upon complete profiles, Proceedings of the 2006 conference on Specification and verification of component-based systems, November 10-11, 2006, Portland, Oregon
Jungkeun Park , Minsoo Ryu , Seongsoo Hong , Lucia Lo Bello, Rapid performance re-engineering of distributed embedded systems via latency analysis and k-level diagonal search, Journal of Parallel and Distributed Computing, v.66 n.1, p.19-31, January 2006
Andreas Ermedahl , Friedhelm Stappert , Jakob Engblom, Clustered Worst-Case Execution-Time Calculation, IEEE Transactions on Computers, v.54 n.9, p.1104-1122, September 2005
Xianfeng Li , Tulika Mitra , Abhik Roychoudhury, Modeling control speculation for timing analysis, Real-Time Systems, v.29 n.1, p.27-58, January 2005
Friedhelm Stappert , Andreas Ermedahl , Jakob Engblom, Efficient longest executable path search for programs with complex flows and pipeline effects, Proceedings of the 2001 international conference on Compilers, architecture, and synthesis for embedded systems, November 16-17, 2001, Atlanta, Georgia, USA
Andreas Ermedahl , Friedhelm Stappert , Jakob Engblom, Clustered calculation of worst-case execution times, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA
Sheayun Lee , Jaejin Lee , Chang Yun Park , Sang Lyul Min, Selective code transformation for dual instruction set processors, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.2, p.10-es, May 2007
Gustavo Gómez , Yanhong A. Liu, Automatic time-bound analysis for a higher-order language, ACM SIGPLAN Notices, v.37 n.3, p.75-86, March 2002
Ian J. Hayes, Procedures and parameters in the real-time program refinement calculus, Science of Computer Programming, v.64 n.3, p.286-311, February, 2007
Karl Lermer , Colin J. Fidge , Ian J. Hayes, A theory for execution-time derivation in real-time programs, Theoretical Computer Science, v.346 n.1, p.3-27, 23 November 2005
Y. A. Liu , G. Gómez, Automatic Accurate Cost-Bound Analysis for High-Level Languages, IEEE Transactions on Computers, v.50 n.12, p.1295-1309, December 2001
C. J. Fidge, Real-Time Schedulability Tests for Preemptive Multitasking, Real-Time Systems, v.14 n.1, p.61-93, Jan. 1998
Christian Ferdinand , Reinhard Wilhelm, Efficient and Precise Cache Behavior Prediction for Real-Time Systems, Real-Time Systems, v.17 n.2-3, p.131-181, Nov. 1999
Vasanth Venkatachalam , Michael Franz, Power reduction techniques for microprocessor systems, ACM Computing Surveys (CSUR), v.37 n.3, p.195-237, September 2005 | pipelined execution;cache memory;worst case execution time;real-time system;RISC processor |
631174 | Conversion of Units of Measurement. | Algorithms are presented for converting units of measurement from a given form to a desired form. The algorithms are fast, are able to convert any combination of units to any equivalent combination, and perform dimensional analysis to ensure that the conversion is legitimate. Algorithms are also presented for simplification of symbolic combinations of units. Application of these techniques to perform automatic unit conversion and unit checking in a programming language is described. | Introduction
Although many programming languages are described as "strongly typed", in most languages
the types of numeric quantities are described only in terms of the numeric representation
(real, integer, etc.) but not in terms of the units of measurement (meters, feet, etc.) of
the quantity represented by a variable. The assignment of a value represented in one unit to
a variable that is assumed to be in a different unit is clearly an error, but such errors cannot
be detected if the type system does not include units of measurement. Conversion of units
must be done explicitly by the programmer; this can be both burdensome and error-prone,
since the conversion factors used by the programmer might be entered incorrectly or might
have limited accuracy. Failure to represent units explicitly within program code is a serious
shortcoming in specification of the program, since later modification of the program might
be performed under the assumption of the wrong units. Hundreds of units of measurement
are in general use; entire books [2] [13] [25] [27] [29] are devoted to tables of unit conversions.
This paper presents methods for symbolic representation of units of measurement. Efficient
algorithms are presented that can convert any combination of units to any equivalent
combination, while verifying the legitimacy of the conversion by dimensional analysis. Algorithms
are also presented for simplification of combinations of symbolic units. Applications
of these techniques in programming languages and in data conversion are discussed.
Related Work
2.1 Units and Unit Conversion
[18] and [14] describe the Système International or International System of Units, abbreviated
SI; these are the definitive references on SI. [4] provides style guidelines for use of SI units
and tables of conversion factors.
Several books provide conversion factors and algorithms for use in unit conversion. The
available books differ widely in the number of units covered, the accuracy of the conversion
factors, and the algorithms that some books present for unit conversion. Although one might
think that unit conversion is easy and "everyone knows how to do it", the number of books
and the variety of methodologies and algorithms they present suggest otherwise.
Horvath [13] has an especially complete coverage of different units, as well as an extensive
bibliography. The tables in this book give conversion factors from a given unit to a single SI
unit; this is similar to the approach taken in the present paper, although Horvath does not
present conversion algorithms per se.
Semioli and Schubert [27] present voluminous tables that combine multiplication of the
conversion factor by the quantity of the source unit to be converted. They also present
somewhat complex methods for obtaining additional accuracy and shifting the decimal place
of the result. This book has the flavor of a book of logarithm tables, although it was published
in 1974, when pocket calculators were available.
Another of these books presents a series of directed acyclic graphs; each node of a graph is a unit,
and arcs between nodes are labeled with conversion factors. The nodes are ordered by size
of the unit. In order to convert from one unit to another, the user traverses the graph from
the source unit to the goal unit, multiplying together all the conversion factors encountered
along the way (or dividing if moving against the directed graph arrows). Although this
technique presents many conversions together in a compact structure, its use involves many
steps and thus many opportunities for error and loss of accuracy.
Karr and Loveman [15] outline a computational method for finding conversion factors.
Their method involves writing dimensional quantities and conversion factors in terms of
logarithms, making a matrix of the equations in logarithmic form, and solving the matrix by
linear algebra. Since the size of the matrix is the number of units involved in the conversion
multiplied by the number of units the system knows about, both the matrix and the time
required to solve it could quickly become large.
Schulz [26] describes COMET, an APL program for converting measurements from the
English system used in the U.S. to the metric system. COMET focuses on conversion of
machine part specifications that include allowable tolerances.
Gruber and Olsen [9] describe an ontology for engineering mathematics, including representation
of units of measurement as an Abelian group. Their system can convert units,
presumably by a process of logical deduction that would be significantly slower than the
methods we describe.
2.2 Units in Programming Languages
Units of measurement are allowed in the ATLAS language [5], although ATLAS allows only
a limited set of units and a limited language for constructing combinations of units.
Cunis [8] describes Lisp programs for converting units. These programs combine units
with numeric measurements at runtime and perform runtime conversion. While this is
consistent with the Lisp tradition of runtime type checking, it does not allow detection
of conversion errors at compile time. Gehani [10] argues in favor of compile-time checking.
Hilfinger [12] describes methods for including units with numeric data using Ada packages
and discusses modifications of Ada compilers that would be required to make use of these
packages efficient and allow compile-time checking of correctness of conversions.
Karr and Loveman [15] propose incorporation of units into programming languages; they
discuss methods of unit conversion, dimensional analysis, and language syntax issues. We
believe that the unit conversion algorithms described in the present paper are simpler: our
methods require only one scalar operation per unit for conversion and one scalar operation
per unit for checking, whereas the methods of [15] are based on manipulation of matrices
that could be large.
2.3 Data Translation
Reusing an existing procedure may require that data be translated into the form expected
by that procedure; we describe in [21] some methods for semi-automatic data translation.
If a procedure requires that its data be presented in particular units, then unit conversion
may also be required. The unit conversion methods of this paper can be combined with the
methods of [21] to accomplish this.
Unit conversion may also be required in preparing data for transmission to a remote site
over a network, or for use in a remote procedure call. IDL (Interface Description Language)
[16] allows exchange of large structured data, possibly including structure sharing, between
separately written components of a large software system such as a compiler. Use of IDL
requires that the user write precise specifications of the source and target data structures.
Herlihy and Liskov [11] describe a method for transmission of structured data over a network,
with a possibly different data representation at the destination. Their method employs user-written
procedures to encode and decode the data into transmissible representations.
3 Unit Representation
We use the term simple unit to refer to any named unit for which an appropriate conversion
factor and dimension (as described later) have been defined. Simple units include the
base units of a system of measurement, such as meter and kilogram, named units such
as horsepower that can be defined in terms of other units, and the SI prefixes such as nano
that are used for scaling. Positive, nonzero numeric constants are also allowed as simple
units. In addition, common abbreviations may be defined as synonyms for the actual units,
e.g., kg is defined as a synonym for kilogram.

A composite unit is a product or quotient of units, and a unit is either a simple unit or
a composite unit. We represent units in Lisp syntax, so that composite units are written
within parentheses, preceded by an operator that is * or /. Thus, the syntax of units is as
follows:
simple-unit ::= symbol | number
unit ::= simple-unit | composite-unit
composite-unit ::= (* unit unit ... unit) | (/ unit unit)
We say that a unit is normalized if nested product and quotient terms have been removed
as far as possible, so that the unit will be at most a quotient of two products. Clearly, any
unit can be normalized; algorithms for simplification of units are described later.
normalized-unit ::= product-unit | (/ product-unit product-unit)
product-unit ::= simple-unit | (* simple-unit ... simple-unit)
A single numeric conversion factor is associated with each simple unit. The conversion factor
is the number by which a quantity expressed in that unit must be multiplied in order to be
expressed in the equivalent unit in the standard system of units. We have chosen as our
standard the SI (Système International d'Unités) system of units [18] [14] [1]; a different
standard system could be chosen without affecting the algorithms described here. (Treating
abbreviations as synonyms, rather than giving them a conversion factor and dimension and
treating them as units, avoids the possibility that slightly different numeric factors might be
specified for the same unit under different names.) Thus, the conversion factor for meter is 1.0,
while the conversion factor for foot is 0.3048, since 1 foot = 0.3048 meter. The conversion
factor for a numeric constant is just the constant itself.
The conversion factor for a product or quotient of units is, respectively, the product
or quotient of the factors for the component units. Based on these definitions, it is easy
to define a recursive algorithm that computes the conversion factor for any unit, whether
simple or composite. This algorithm is shown in pseudo-code form in Fig. 1. Assuming that
the definitions of units are acyclic, the algorithm is guaranteed to terminate and requires
time proportional to the size of the unit expression tree.
function factor(unit)
1. if unit is a simple unit (number or symbol),
   (a) if unit is a number, return unit;
   (b) if unit is a synonym of unit', return factor(unit');
   (c) if unit has a predefined conversion factor f, return f;
   (d) else, error: unit is undefined.
2. otherwise,
   (a) if unit is a product, (* u1 ... un), return factor(u1) * ... * factor(un);
   (b) if unit is a quotient, (/ u1 u2), return factor(u1) / factor(u2);
   (c) else, error: unit has improper form.

Figure 1: Conversion Factor Algorithm
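To make the algorithm concrete, a minimal Common Lisp rendering of Fig. 1 might look as follows; the hash tables *unit-factors* and *unit-synonyms*, and the assumption that a quotient has exactly two operands, are illustrative choices rather than details of the paper's actual implementation.

(defvar *unit-factors* (make-hash-table))   ; unit symbol -> numeric conversion factor
(defvar *unit-synonyms* (make-hash-table))  ; abbreviation -> canonical unit symbol

(defun factor (unit)
  (cond ((numberp unit) unit)
        ((symbolp unit)
         (let ((syn (gethash unit *unit-synonyms*))
               (f   (gethash unit *unit-factors*)))
           (cond (syn (factor syn))
                 (f f)
                 (t (error "Undefined unit: ~S" unit)))))
        ((and (consp unit) (eq (first unit) '*))
         ;; product: multiply the factors of all operands
         (reduce #'* (rest unit) :key #'factor :initial-value 1))
        ((and (consp unit) (eq (first unit) '/))
         ;; quotient: divide the factor of the numerator by that of the denominator
         (/ (factor (second unit)) (factor (third unit))))
        (t (error "Improper unit form: ~S" unit))))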
Our system provides facilities for defining named simple units with specified numeric
conversion factors and for defining units in terms of previously defined units. Examples of
unit definitions are shown in Fig. 2. In the first example, each unit is defined by its name,
numeric conversion factor, and a list of synonyms. In the second example, the conversion
factor is specified as an expression in terms of previously defined units.
We now consider conversion from a source unit, unit_s, to a desired unit, unit_d. Let q_s
be the numeric quantity expressed in the source unit, q_SI be the equivalent quantity in the
standard (SI) system, and q_d be the quantity in the desired unit. Let f_s = factor(unit_s) be
the conversion factor of the source unit and f_d = factor(unit_d) be the conversion factor of
the desired unit. Then we have the equations:

   q_SI = q_s * f_s        q_SI = q_d * f_d

so that q_d = q_s * (f_s / f_d).
(defsimpleunits 'length
  '((meter    1.0     (m meters))
    (foot     0.3048  (ft feet))
    (angstrom 1.0e-10 (a angstroms))
    ...))

(defderivedunits 'force
  '((newton      (/ (* kilogram meter) (* second second))
                 (nt newtons))
    (pound-force (/ (* slug foot) (* second second))
                 ...)
    (ounce-force (/ pound-force 16)
                 ...)))

Figure 2: Unit Definitions
Thus, conversion to a desired unit is accomplished by multiplying the source quantity by a
conversion factor f = f_s / f_d.
For example, to convert a measurement in terms of feet to centimeters, the factor would be
f = 0.3048 / 0.01 = 30.48.
Given the factor algorithm shown in Fig. 1 that computes the factor for any simple or
composite unit, it is easy to convert any combination of units to any equivalent combination.
Since a number is also defined as a unit, a numeric quantity of a given unit can also be
converted. The convert function takes as arguments the source unit and desired unit; it
returns the conversion factor f , or NIL if the conversion is undefined or incorrect.
?(convert 'foot 'centimeter)
?(convert 'meters 'feet)
?(convert '(/ pi (* micro fortnight)) '(/ inch sec))
?(convert '(* acre foot) 'tablespoons)
?(convert '(/ (* mega pound-force) acre) 'kilopascals)
?(convert 'kilograms 'meters)
The last example, an attempt to make an incorrect conversion from kilograms to meters,
gives a result of NIL. Dimensional analysis, as described in the next section, is used to verify
that a requested conversion is correct; if not, a result of NIL is returned rather than a numeric
conversion factor.
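A sketch of convert itself, assuming a dimension function as described in Section 5 below, could be as simple as the following; this is an illustration, not the paper's exact code.

(defun convert (from-unit to-unit)
  ;; Return the factor f = fs / fd, or NIL if the dimensions differ.
  (if (= (dimension from-unit) (dimension to-unit))
      (/ (factor from-unit) (factor to-unit))
      nil))

;; (convert 'foot 'centimeter)  =>  30.48   ; 0.3048 / 0.01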
There are certain conversions that, while not strictly correct, deserve a special note.
The pound, for example, which is a unit of mass, is often used as the name of the force
unit that is properly called pound-force. [The same confusion exists with other mass units
such as the ounce and the kilogram.] Conversion of pounds to pounds-force involves an
apparent conversion from mass to force. Similarly, in particle physics the mass of a particle
is often described using energy units such as gigaelectronvolts (GeV). As described below,
the dimensional analysis system can either perform strict dimensional checking and prohibit
force-mass and energy-mass conversion, or it can be made to detect and allow these specific
conversions, with strict checking otherwise. In the latter case, the following conversions are
allowed:
?(convert 'pound-force 'kilogram)
?(convert '(* 1.67e-27 kg) 'gev)
The conversion from mass units to the corresponding force units requires multiplication
by g, the conventional value for the acceleration of free fall at the earth's surface (sometimes
called "gravity"), while the conversion from mass units to energy units requires multiplication
by the square of the speed of light. Each of these is a physical constant, expressed in SI
units.
5 Dimensional Analysis
If the above algorithms are to produce meaningful results, it must be verified that the
requested conversion is legitimate; it is clearly impossible, for example, to convert kilograms
to meters. Correctness of unit conversion is verified by the long-established technique of
dimensional analysis [6]: the source and goal units must have the same dimensions.
Formally, we define a dimension as an 8-vector of integral powers of eight base quantities.
The base quantities are shown in Fig. 3 together with the base unit that is used for each
quantity in the SI system [18] [14]. We have added money, which is not part of the SI system,
as a dimension.
Figure 3: Base Quantities and Units
The dimension of a product of units is the vector sum of the dimensions of its components,
while the dimension of a quotient of units is the vector difference of the dimensions of its
components. It can be verified that conversion from one unit to another is legitimate by
showing that the dimension vectors of the two units are equal, or equivalently, that their
difference is a zero vector.
The powers of base quantities that are encountered in practice are usually small: they
are seldom outside the range ±4. While a dimension can be represented as a vector of eight
integer values, with dimension checking done by operations on vectors, this is somewhat
expensive computationally. Since the integers in the vector are small, it might be more
efficient to pack them into bit fields within an integer word. In this section, we describe a
variation of this packing technique. A dimension vector is encoded within a single 32-bit
integer, which we call a dimension integer, using the algorithms presented below. Using this
encoding, dimensions can be added, subtracted, or compared using ordinary scalar integer
arithmetic.
It may be helpful to consider the analogy of doing vector arithmetic by encoding vectors
as decimal integers. For example, the vector operation [1 2 3] + [2 0 4] = [3 2 7] can
be simulated using decimal integers: 123 + 204 = 327. This technique will work as long as
it can be guaranteed that there will not be a "carry" from one column of the decimal integer
to another. We use a similar method to encode a dimension vector as a 32-bit integer. A
careful justification of the conditions under which use of the integer encoding is correct is
presented following the algorithms. Finally, we argue that these conditions will be satisfied
in practice, so that use of the integer encoding for dimension checking is justified.
We define two 8-vectors and an integer constant (shown in decimal notation) as follows:
   dimsizes = [20 20 20 10 10 10 10 10]
   dimvals = [1 20 400 8000 80000 800000 8000000 80000000]
   dimbias = 444444210
We assume for purposes of this presentation that the 8-vectors are indexed beginning with 0;
the index into an 8-vector for each kind of quantity corresponds to the Index column shown
in Fig. 3. The vector dimsizes gives the size of the field assigned to each quantity; e.g.,
dimsizes[0] is 20, corresponding to a field size of 20 and an allowable value range of ±9
for the power of length. The vector dimvals gives multipliers that can be used to move a
vector value to its proper field position; it is defined as follows:
   dimvals[0] = 1
   dimvals[i] = dimvals[i-1] * dimsizes[i-1],   for i = 1, ..., 7
The integer dimbias is a value that, when added to a dimension integer, will make it positive
and will bias each vector component within its field by half the size of the field. dimbias is
defined as:
   dimbias = Σ_{i=0..7} dimvals[i] * (dimsizes[i] / 2)

Given these definitions, algorithms are easily defined to convert between an 8-vector form of
dimension and the equivalent dimension integer. A dimension integer is easily derived from
a dimension vector v as the vector dot product of v and dimvals:
   dimint(v) = Σ_{i=0..7} v_i * dimvals[i]
For example, the dimension integer for a force (powers of +1 for length, -2 for time, and +1 for mass, zero otherwise) can be calculated as 1*1 + (-2)*20 + 1*8000 = 7961.
A dimension integer can be converted back to an 8-vector by adding dimbias to it and
then extracting the values from each field. This algorithm is not needed for unit conversion,
procedure dimvect (n, v)
   integer i, m, sz, mm;
   begin
      m := n + dimbias;
      for i := 0 to 7 do
         begin
            sz := dimsizes[i];
            mm := m / sz;                    { truncated division }
            v[i] := (m - mm * sz) - sz / 2;  { extract the field and remove the bias of sz / 2 }
            m := mm
         end
   end

Figure 4: Conversion of Dimension Integer to Dimension Vector
but is provided for completeness. The algorithm, shown in Fig. 4, has as arguments a
dimension integer n and an 8-vector v; it stores the dimension values derived from n into
v. This procedure uses truncated division to extract the biased value from each field of the
integer encoding. The bias value, sz / 2, is then removed to yield the signed field value.
Dividing by the field size is then used to bring the next field into the low-order position.
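In Common Lisp, the encoding side of this scheme might be sketched as follows; the constant name +dimvals+ is an assumption, and the index assignment (length at index 0, time at index 1, mass at index 3) follows the values quoted in this section.

;; Field multipliers for the eight base quantities.
(defconstant +dimvals+ #(1 20 400 8000 80000 800000 8000000 80000000))

(defun dimint (v)
  ;; Encode an 8-vector of integer powers as a single dimension integer.
  (loop for i from 0 below 8
        sum (* (aref v i) (aref +dimvals+ i))))

;; Force = length * mass / time^2:
;; (dimint #(1 -2 0 1 0 0 0 0))  =>  7961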
Our algorithm uses dimension integers, rather than dimension vectors, to check the
correctness of requested unit conversions. Addition, subtraction, and comparison of dimension
vectors are simulated by scalar addition, subtraction, and comparison of corresponding
dimension integers. We can state the following theorems regarding dimension integers:
Theorem 1 If u and v are dimension vectors, then dimint(u + v) = dimint(u) + dimint(v),
dimint(u - v) = dimint(u) - dimint(v), and if u = v, then dimint(u) = dimint(v).
These results follow immediately from the definition of dimint.
Theorem 2 If u and v are dimension vectors, dimint(u) = dimint(v), and |u_i| < dimsizes[i] / 2
and |v_i| < dimsizes[i] / 2 for each i, 0 ≤ i ≤ 7, then u = v.
Proof: Suppose that dimint(u) = dimint(v) but u ≠ v. Suppose that u_0 ≠ v_0. By the
definition of dimint and dimvals, dimvals[0] = 1 and every dimvals[i] with i ≥ 1 is a multiple
of dimsizes[0]. Therefore, u_0 - v_0 = - Σ_{i=1..7} (u_i - v_i) * dimvals[i] is a nonzero
multiple of dimsizes[0], so |u_0 - v_0| ≥ dimsizes[0];
and by the triangle inequality, |u_0 - v_0| ≤ |u_0| + |v_0| < dimsizes[0],
but this is contrary to our assumptions that |u_0| < dimsizes[0] / 2 and |v_0| < dimsizes[0] / 2. Therefore,
it must be the case that u_0 = v_0. By inductive repetition of this argument on the remaining
elements of u and v, it must be the case that u = v.
These theorems show that checking the dimensions of unit conversions by means of
dimension integers is correct so long as the individual dimension quantities are less than
half the field sizes given in the dimsizes vector. We justify the use of the integer encoding
of dimension vectors as follows. The powers of dimension quantities that are found in units
that are used in practice are generally small - usually within the range ±4. If a field size
of 20 is assigned to length, time, and temperature, and a field size of 10 is assigned to the
others, the dimension vector will fit within a 32-bit integer. The representation allows a
power of ±(dimsizes[i] / 2 - 1) for each quantity. As long as each element of a dimension vector
is within this range, two dimension vectors are equal if and only if their corresponding
dimension integers are equal; furthermore, integer addition and subtraction of dimension
integers produce results equal to the dimension integers of the vector sum and difference of
the corresponding dimension vectors. Our representation allows a power of ±9 for length,
time, and temperature, and a power of ±4 for mass, current, substance, luminosity, and
money. This should be quite adequate. We note that dimension vectors are used only in tests
of equality: unequal dimensions of source and goal units indicate an incorrect conversion.
An "overflow" from a field of the vector in the integer representation will not cause an error
to be indicated when correct unit conversions are performed, because the two dimension
values will still be equal despite the overflow. Two unequal dimension vectors will appear
unequal, despite an overflow, unless the incorrect dimension integer corresponds to a very
different kind of unit that has a dimension value that happens to be exactly equal; this is
most unlikely to happen accidentally. For example, if the user attempts to convert a 20th
power of length into a time, the system will fail to detect an error. This is such an unlikely
occurrence that we consider the use of the more efficient integer encoding to be justified.
Note, however, that 8-vectors could be used for dimension checking instead if desired.
Cunis [8] describes an alternative representation of dimensions. He represents dimensions
as a rational number in Lisp, i.e., as a ratio of integers that represent the positive and
negative powers of dimensions. Each base quantity, such as length, is assigned a distinct
small prime; the product of these, raised to the appropriate powers, forms the integer used in
the ratio. This method requires somewhat more storage and computation than the method
we present, and arithmetic overflow could be a problem if extended-precision arithmetic is
not used; since Lisp provides extended-precision integers, this is not a problem in Lisp.
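For comparison, the prime-based encoding can be illustrated as follows; the particular prime assignment shown is hypothetical, since Cunis's actual assignment is not given here.

(defconstant +length-dim+ 2)   ; hypothetical prime for length
(defconstant +time-dim+   3)   ; hypothetical prime for time
(defconstant +mass-dim+   5)   ; hypothetical prime for mass

;; The dimension of force (length * mass / time^2) as a Lisp rational:
;; (/ (* +length-dim+ +mass-dim+) (* +time-dim+ +time-dim+))  =>  10/9
;; Two dimensions are equal iff their rationals are equal.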
5.1 Unit Conversion Checking
The dimension integer corresponding to a unit can be found as follows. The dimension of
a constant is 0; this is also the case for units such as radian or nano 2 . The dimension of a
base quantity is given by the corresponding value in the vector dimvals; for example, the
dimension of time is dimvals[1] or 20. The dimension integer of a product of units is the
sum of their dimension integers (using ordinary 32-bit integer arithmetic), and the dimension
of a quotient of units is the difference of their dimensions. Dimensions of common abstract
units such as force are found by computing the dimension of their expansion in terms of
base abstract units; for force this expansion is (/ (* length mass) (* time time)), which has dimension integer 7961.
We also define an abstract unit dimensionless with dimension integer 0. When a unit
symbol is defined to the system, its dimension is determined from the abstract unit specified
for it; thus, in Fig. 2, meter receives the dimension of length. When a unit is defined by an
expansion in terms of other units, the dimension of the expansion is verified by comparison
with the dimension of its abstract unit.
When convert is called to convert one unit to another, it also computes the dimension
of the source unit minus the dimension of the goal unit. If the difference is 0, the dimensions
are the same, and the conversion is legitimate. A nonzero value indicates a difference in
dimensions of the source and goal units.
If strict conversion is desired, any difference in dimension is treated as an error. In some
cases, however, it may be desired to allow automatic conversion between mass and force
or between mass and energy. Each of these conversions will produce a unique difference
signature, which can be recognized; the conversions and corresponding dimension differences
(source - goal) are shown in Fig. 5. If the difference matches the integer signature, the
conversion factor should be multiplied by the additional factor shown in the table. For
example, in converting kilograms (mass) to newtons (force),
?(convert 'kilogram 'newton)
the dimension of kilogram is 8000 and the dimension of newton is 7961, so the difference
is 39, and the proper multiplier is 9.80665. Although these multipliers are
expressed in SI units, the conversion works for all unit systems.
2 Constants can be considered to have a dimension of unity, whose logarithmic representation is zero; such
units are sometimes referred to as "dimensionless" [18].
Conversion       Vector                 Integer   Factor
mass to weight   [-1  2 0 0 0 0 0 0]         39   9.80665
weight to mass   [ 1 -2 0 0 0 0 0 0]        -39   1 / 9.80665
mass to energy   [-2  2 0 0 0 0 0 0]         38   8.987552e16
energy to mass   [ 2 -2 0 0 0 0 0 0]        -38   1 / 8.987552e16

Figure 5: Dimension Conversions
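A sketch of how these signatures might be recognized in code follows; the function name and the case dispatch are illustrative assumptions, not the paper's implementation.

(defun special-conversion-multiplier (diff)
  ;; DIFF is the dimension of the source minus the dimension of the goal.
  ;; Return the extra multiplier for the allowed conversions, 1.0 when the
  ;; dimensions match exactly, or NIL for an illegitimate conversion.
  (case diff
    (0   1.0)
    (39  9.80665)                  ; mass -> weight: multiply by g
    (-39 (/ 1.0 9.80665))          ; weight -> mass
    (38  8.987551787e16)           ; mass -> energy: multiply by c^2
    (-38 (/ 1.0 8.987551787e16))   ; energy -> mass
    (t   nil)))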
6 Units in Programming Languages
Although most modern programming languages require specification of data types and
feature compile-time type checking, units generally are not included as part of types. This
is unfortunate, since use of incorrect units must be considered to be a type error. Some
commonly used procedures have implicit requirements on the units of their arguments; for
example, the system sin function may require that its argument be expressed in floating-point
radians. Karr and Loveman [15] advocated the inclusion of units in programming
languages; although the ATLAS language [5] incorporates units, to our knowledge no widely-used
programming language does so.
We have implemented the use of units in the GLISP language. GLISP ("Generic Lisp")
[19, 20] is a high-level language with abstract data types that is compiled into Lisp (or into
C by an additional translation step); the GLISP compiler is implemented in Common Lisp
[28]. GLISP has a data description language that can describe Lisp data structures or data
structures in other languages. GLISP is described only briefly here; for more detail, see [21]
and [19]. In the sections below, we describe both the language features needed to include
units in a programming language and the compiler operations necessary to perform unit
checking and conversion.
Karr and Loveman [15] suggested that units be implemented as reserved words that could
be used as multipliers in arithmetic expressions. Instead, we have implemented units as part
of data types. The implementation of units within a programming language involves several
different aspects:
1. inclusion of units as part of the type specification language
2. type checking of uses of data that have units
3. derivation of the units of the result of an arithmetic operation
4. coercion of data into appropriate units when necessary
5. a syntax for expressing numeric constants together with their units
Each of these aspects is described below.
6.1 Units as Part of Types
The types usually used to describe numeric data, such as integer, real, etc., describe only
the method of encoding numeric values. The units denoted by the numeric values are an
independent issue. Therefore, both the numeric type and unit must be specified as part of a
data type. We have adopted a simple syntax to specify the two together:
(units numeric-type unit )
For example, a floating-point number denoting a quantity of meters would have the type:
(units real meters)
A type specification of this form may be used wherever a numeric type specification such as
real would otherwise be used.
Since the unit specification language allows constants to be included as part of a unit, it
is possible to specify unusual units that might be used by hardware devices. For example,
suppose that an optical shaft encoder provides the angular position of a shaft as an 8-bit
integer, so that a circle is broken into 256 equal parts. This unit can be expressed as:
(units integer (/ (* 2 pi radians) 256))
6.2 Results of Operations and Coercion
If unit checking and conversion are to be performed, it is necessary to determine the unit
of the result of an arithmetic operation. In general, it is necessary to create and perhaps
simplify new symbolic unit descriptions. There are several classes of operations, which are
handled differently.
The units produced by multiplication and division are easily derived by creating new
units that symbolically multiply or divide, respectively, the source units. For example, if a
quantity whose unit is (/ meter second) is multiplied by a quantity whose unit is second,
the resulting unit is:
(* (/ meter second) second)
This unit could be simplified to meter, but in most cases it is not necessary for a compiler
to perform such simplification: usually only the numeric conversion factor and dimension of
the unit are used, and these are not affected by redundancy in the unit specification.
Exponentiation to integer powers can be treated as multiplication or division. The
function sqrt is a special case: the dimension vector of the argument unit must contain
only multiples of 2, and it is necessary to produce an output unit that is "half" the input
unit; this may require unit simplification, as discussed below.
There are differences of opinion regarding coercion of types by a compiler. Some languages
allow coercion within an arithmetic expression; for example, if an integer and a real are
added, the integer will be converted to real prior to the addition. Other languages allow
coercion only across an assignment operator. The most strict languages have no coercion
and treat type differences as errors. The same issues and arguments can be raised regarding
automatic coercion of units, and the same implementation options are available. Note,
however, that if no coercion is allowed, the language must furnish some construct to allow
the programmer to invoke type conversion explicitly. We describe below how automatic
coercion can be implemented if it is desired.
In the case of addition, subtraction, comparison, and assignment operations, the units of
the two arguments must be the same if the operation is to be meaningful. If the units are
unequal, an attempt is made to convert the unit of the right-hand argument to the unit of
the left-hand argument. If a conversion factor f is not returned by the convert algorithm,
the operation is illegitimate (e.g., an attempt to add kilograms to meters), and an error
should be signaled by the compiler.
(gldefun t1 (x: (units real meters)
             y: (units real kilograms))
  (x + y))

glisp error detected by GLCOERCEUNITS in function T1
Cannot apply op + to METERS and KILOGRAMS
in expression: (X + Y)
If the conversion factor f is 1.0, no compiler action is needed; this can occur if the units are
equivalent but unsimplified. If the conversion factor is other than 1.0, a multiplication of
the right-hand operand by the conversion factor must be inserted by the compiler. The
following example illustrates how the GLISP compiler inserts such a conversion for an
addition operation:
(gldefun t2 (x: (units real meters)
             y: (units real feet))
  (x + y))

result type: (UNITS REAL METERS)

(LAMBDA (X Y) (+ X (* 0.3048 Y)))
In this example, the variable y, which has units feet, is added to the value of the variable
x, which has units meters. In this case, the compiler has inserted a multiplication by the
appropriate factor to convert feet to meters prior to the addition. The result type is the
type of the left-hand argument; this convention causes the type of a variable that is on the
left-hand side of an assignment statement to take precedence.
In some cases, it may be known that an argument of a procedure is required to have
certain units; in such cases, procedure arguments can be type-checked and coerced if needed.
For example, a library sin function may require an argument in radians; if the unit of the
existing data is as described above for the shaft encoder example, conversion will be required:
(gldefun t3 (x: (units integer (/ (* 2 pi radians) 256)))
  (sin x))

result type: REAL
We have not described any language mechanism to allow the programmer to explicitly
convert units to a desired form. Such a conversion can be accomplished by assigning a value
to a variable that has the desired unit. The units used for intermediate results within
an arithmetic expression may be somewhat unusual, but will always be converted to a
programmer-specified unit upon assignment to a variable. Conversion of units may generate
extra multiplication operations; however, if the compiler performs constant folding [3], these
operations and their conversion factors can often be combined with other constants.
Human programmers usually write programs in such a way that intermediate results
have reasonable units and reasonable numeric values. When automatic coercion of units is
performed, it is possible that intermediate values may have unusual units and very large
or very small numeric values. It is possible that compiler-generated unit conversions might
cause a loss of accuracy compared to code written by humans that does the unit conversions
explicitly. For this reason, it is advisable that automatic coercion of units be used only
with floating-point representations with high accuracy, such as the 64-bit IEEE Standard
representation. While a human programmer who is aware of unit conversions can always
force the desired units to be used, a compiler that performs conversions automatically might
allow a careless programmer to overlook a potential accuracy problem.
We have found that inclusion of units in programs tends to be "all or nothing". That is,
if units are specified for some variables, then units need to be specified for other variables
that appear in expressions with those variables to avoid type errors.
6.3 Constants with Units
There may be a need to include physical constants, i.e., numbers with attached units, as
part of a program. We have adopted a syntax that allows a numeric constant and unit to
be packaged together:
'(q number unit )
The quoted q form indicates a quantity with units. The type of the result is the type of the
numeric constant combined with the specified unit. For example, the speed of light could be
'(q 2.99792458e8 (/ meter second))
6.4 Unit Simplification
There are some cases in which unit simplification is needed. For example, it is desirable to
simplify a unit that describes the result of a function. An algorithm for unit simplification
should be able to handle any combination of units, including mixtures of units from different
systems. The form of a unit that is considered to be "simplified" may depend on the needs of
the user: an electrical engineer might consider (* kilowatt hour) to be simplified, while a
physicist might prefer joule. We present below an algorithm that works well in simplifying
units for several commonly used systems of units; in addition, it allows some customization
by specifying new unit systems.
A unit system is a set of base units that are by convention taken as dimensionally
independent, and a set of derived units, formed from the base units by multiplication and
division, that are by convention used with the unit system. Other units that are used for
historical reasons may be associated with a unit system by defining them in terms of a
numeric conversion factor and a combination of base units. We have implemented three
unit systems: si (the Syst'eme International or SI system), cgs (centimeter-gram-second),
and english (slug-foot-second). For each commonly used kind of unit (e.g., length, force,
pressure, etc.) we define the standard unit for that kind of unit in each system (e.g., meter,
newton, and pascal, respectively, for the si system).
Our algorithm for symbolic simplification of a unit is as follows:
1. The desired system for the simplified result may be specified as a parameter. If it
is unspecified, the dominant system of the input unit is determined by counting the
number of occurrences of units associated with known systems; if a dominant system
cannot be determined, si is used.
2. The input unit is "flattened" so that it consists of a quotient of two products. At
the same time, input units are recursively expanded to their equivalents in terms of
base units (length, mass, time, etc.). Units that are equivalent to numbers (have
dimensionality 0), such as mega or degree, are converted to numbers.
3. Any base units in the numerator and denominator product lists that are not in the goal
system are converted to the corresponding units in the goal system. The conversion
factors are accumulated.
4. The numerator and denominator product lists are sorted alphabetically.
5. Corresponding duplicate units are removed from the lists in a linear pass down the two
lists; this cancels units that appear in both numerator and denominator.
6. The standard units that are defined for the goal system are examined. If the multisets
represented by the numerator and denominator of the standard unit's expansion are
contained in the numerator and denominator, then the standard unit can be a factor of
the simplified unit. (The standard unit is also tested as an inverse factor.) The largest
standard unit factor (with size greater than one base unit) is chosen, and it replaces
its expansion in the unit that is being simplified. This process is continued until no
further replacements can be made; it must terminate, since each replacement makes
the unit expansion smaller.
As an example, we show how the algorithm simplifies the unit expression:
?(simplifyunit '(/ joule watt))
The units joule and watt are defined in terms of base units:
JOULE = (/ (* METER METER KILOGRAM) (* SECOND SECOND))
WATT  = (/ (* METER METER KILOGRAM) (* SECOND SECOND SECOND))
The quotient of these two units is flattened as a quotient of two products:
(/ (* METER METER KILOGRAM SECOND SECOND SECOND)
(* SECOND SECOND METER METER KILOGRAM))
The two product lists are sorted:
(KILOGRAM METER METER SECOND SECOND SECOND)
(KILOGRAM METER METER SECOND SECOND)
Duplicated units in the two sorted lists are removed:
In this case, the result is just a single unit: SECOND.
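The cancellation step (step 5) can be sketched in Common Lisp as a single linear pass over the two sorted lists; the function below is an illustrative reconstruction, not the paper's code.

(defun cancel-units (num den)
  ;; NUM and DEN are alphabetically sorted lists of base-unit symbols.
  ;; Remove units that appear in both, returning the two reduced lists.
  (cond ((null num) (values nil den))
        ((null den) (values num nil))
        ((eq (first num) (first den))
         (cancel-units (rest num) (rest den)))
        ((string< (symbol-name (first num)) (symbol-name (first den)))
         (multiple-value-bind (n d) (cancel-units (rest num) den)
           (values (cons (first num) n) d)))
        (t (multiple-value-bind (n d) (cancel-units num (rest den))
             (values n (cons (first den) d))))))

;; (cancel-units '(KILOGRAM METER METER SECOND SECOND SECOND)
;;               '(KILOGRAM METER METER SECOND SECOND))
;;   =>  (SECOND), NIL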
This algorithm has the advantage of being universal: by completely breaking its input
down to base units, canceling any duplicates, and then making a new unit from the result, it
can accept any combination of units as input. It is also deterministic: it produces the same
result for any way of stating the same unit. The algorithm is also reasonably fast. Since the
algorithm works with a definition of a unit system in terms of a set of preferred units, it is
possible for a user to define a modified unit system in which the user specifies the units that
are preferred as the result of simplification.
Some examples of unit simplification are shown below.
?(simplifyunit '(/ meter foot))
?(simplifyunit '(/ joule watt))
?(simplifyunit '(/ joule horsepower))
?(simplifyunit '(/ (* kilogram meter)
(* second second)))
NEWTON
?(simplifyunit 'atm)
(* 101325.0 PASCAL)
?(simplifyunit 'atm 'english)
(* 14.695948775721259 POUNDS-PER-SQUARE-INCH)
?(simplifyunit '(/ (* amp second) volt))
FARAD
?(simplifyunit '(/ (* newton meter)
(* ampere second)))
?(simplifyunit '(/ (* volt volt)
(* lbf (/ (* atto parsec)
(* 26250.801011041247 OHM)
It was mentioned above that determining the type returned by the sqrt function requires
making a unit that is "half" the input unit; for example, if the input unit is (* meter
meter), the output unit would be meter. The process for determining the unit returned by
sqrt is the same as the process of unit simplification described above, except for the last
step. After the initial steps of simplification, the input unit will be represented by flat, sorted
numerator and denominator lists containing base units of the same unit system, and possibly
a numeric factor. Both lists must consist of adjacent pairs of identical units; otherwise, the
input unit is in error. The output unit is determined by collecting every other member of
the input lists (checking to make sure the alternate member is identical) and making a new
unit from these lists and the square root of the numeric factor.
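The "half unit" computation can be sketched as follows, assuming the flattened, sorted base-unit lists of the simplification algorithm; the function name is illustrative.

(defun halve-sorted-units (lst)
  ;; LST is a sorted list of base units in which each unit must occur an
  ;; even number of times; return the list with every other occurrence removed.
  (cond ((null lst) nil)
        ((and (rest lst) (eq (first lst) (second lst)))
         (cons (first lst) (halve-sorted-units (cddr lst))))
        (t (error "Unit is not a perfect square: ~S" lst))))

;; (halve-sorted-units '(METER METER SECOND SECOND))  =>  (METER SECOND)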
6.5 Units and Generic Procedures
We have done research on the reuse of generic procedures [22] [23]; a generic procedure is
one that can be used for a variety of data types. When the arguments of a generic procedure
include units, automatic checking and conversion of units are essential for correct reuse.
In the GLISP language [19] [21], it is not necessary to declare the type of every variable.
When a variable is assigned a value, type inference is used to determine the type of the value,
and the variable's type becomes the type of the value assigned to it. (Assignment of values
of different types to the same variable will cause an error to be reported by the compiler.)
This feature is useful in writing generic procedures: it is only necessary to specify the main
types that are used (often just the types of input parameters); other types can be derived
from those types. Because the types of local variables are specified indirectly, a single generic
procedure can be specialized for a variety of input types. This is especially useful in the case
of types that include units.
Figure 6: Calculation of Position of Aircraft from Radar Data
We have developed a system, called VIP [24] (for View Interactive Programming) that
generates programs from graphical connections of physical and mathematical models. A
program is generated from equations associated with the physical models. Typically, only
the types and units of inputs and outputs are specified; the units and types of intermediate
values are derived by type and unit inference. This system is illustrated in the diagram
shown in Fig. 6. The problem used as an example is a small but realistic numerical problem:
the calculation of the position of an aircraft from data provided by an air search radar. We
assume that the radar provides as input the time difference between transmission and return
of the radar pulse, as well as the angle of the radar antenna at the time the return pulse is
detected. When the radar illuminates the aircraft, we assume that the aircraft transponder
transmits the identity of the aircraft and its altitude. The position and altitude of the
radar station are assumed to be known. These items comprise the input data provided to
the program. We assume that the units of measurement of the input data are externally
specified (e.g., by hardware devices), so that the program is required to use the given units.
In creating the program, the user of VIP is able to select from a variety of predefined
physical and mathematical models, constant values, and operators. Initially, the VIP display
consists of a set of boxes representing the input data, and an output box. In our example, the
user first decides to model the travel of the radar beam as an instance of uniform-motion.
The user selects the Physics command, then kinematics from the Physics menu, then
uniform-motion from the kinematics menu. The input value TIME-DIFF is connected to
the time button t of the motion. Next, the user selects Constant and obtains the constant
for the speed of light, denoted C, and connects it to the velocity v of the motion. The distance
d of the motion then gives the total (out-and-back) distance from the radar to the aircraft; by
dividing this distance by 2, the one-way distance is obtained. This distance is connected to
the hypotenuse of a Geometry object, right-triangle. The difference between the altitude
of the aircraft and the altitude of the radar is connected to the y of this triangle. The x of
this triangle is then the distance to a point on the ground directly underneath the aircraft.
This distance and the angle of the radar give a range and bearing to the aircraft from the
by connecting these to another right triangle, x and y offsets of the aircraft from the
radar are obtained. These are collected to form a relative position vector, RELPOS, which is
added to the radar's UTM (universal transverse mercator) coordinates to form the output.
While the process described above is rather lengthy when described in words, the time
taken by an experienced user to create this program using VIP was less than two minutes.
Note that this problem involves several instances of conversion of units of measurement, a
physical constant, and algebraic manipulation of several equations; all of these were hidden
and performed automatically. Fig. 7 shows the GLISP program produced by VIP. Fig. 8
shows the program after it has been compiled and mechanically translated into C.
In this example, unit conversion is a major part of the application program. However, the
user only needed to specify the input units; all unit conversion and checking was performed
automatically by the compiler, so that this source of programming difficulty and potential
error was eliminated.
(LAMBDA (TIME-DIFF: (UNITS INTEGER (* 100 NANOSECOND))
RADAR-UTM: UTM-CVECTOR)
(D2 := (* '(Q 2.997925E8 (/ M S)) TIME-DIFF))
(X4 := (* X3 (COS RADAR-ANGLE)))
(RELPOS :=
Figure 7: GLISP Program Generated by VIP for Radar Problem
7 Conclusions and Future Work
We have described algorithms for conversion of units, for compiler checking of units used in
arithmetic operations and for coercing units when necessary, and for symbolic simplification
of combinations of units. The unit conversion algorithms are as simple as possible: they
require only one multiply or divide per unit for conversion, and one add or subtract per
unit for dimension checking. These algorithms have been implemented in a compiler that
allows units as part of data type specifications and that performs automatic unit checking
and conversion.
Unit conversion is a problem that will not go away, even if the United States converts
to the SI system. Workers in particular fields will continue to use units such as parsec or
micron rather than meter, both because of tradition and because such units are convenient
in size for the measurements typically used in practice. The compiler algorithms that we have
described are relatively easy to implement, so that units could be incorporated into a variety
of programming languages. These algorithms make it feasible to implement essentially all
known units of measurement, so that users may use any units they find convenient. We agree
with Karr and Loveman [15] that scientific programming languages should support the use
of units; we hope that presentation of these algorithms will encourage such a trend.
The ARPA Knowledge-Sharing Project [17] focuses on combining data from distributed
databases and knowledge bases. The algorithms described in this paper can be used for
conversion when these databases use different units.
We have included money as a dimension, since it is often important to convert units
such as (/ dollar kilowatt-hour) that include monetary units. Of course, the conversion
CUTM *tqc (time-diff, aircraft-altitude, radar-altitude, radar-angle,
long time-diff, aircraft-altitude, radar-altitude, radar-angle;
long out1;
float d1, out2, x1, y1, x2;
CUTM *relpos, *glvar1621;
relpos->east
glvar1621->east relpos->east
return output;
Figure 8: Radar Program Compiled and Converted to C
factors for different currencies are not constant; however, by updating the conversion factors
periodically, useful approximate conversions can be obtained.
Our algorithms do not handle units that include additive constants; the common examples
of such units are the Celsius and Fahrenheit temperature scales. Other features of the GLISP
language can be used to handle these cases. Note that it is only possible to convert from
a pure temperature unit to another temperature unit; it would be incorrect to multiply a
non-absolute temperature by another unit. The kelvin and the degree Rankine are linearly
related and can be converted by our algorithms.
Ruey-Juin Chang implemented an Analyst's Workbench [7] to aid in making analytical
models. She included substance as an additional part of a quantity, along with numeric
quantity and unit; for example, "10 gallons of gasoline" has gasoline as the substance.
Engineering and scientific calculations often involve conversions that depend on the substance
as well as the quantity and units. For example, "10 gallons of gasoline" can be converted
into volume (10 gallons), mass, weight, energy, money, or energy equivalent in kilograms
of anthracite coal. The algorithms presented in this paper might usefully be extended to
include these kinds of conversions as well.
8 Software Available
The unit conversion software described in this paper is available free by anonymous ftp
from ftp.cs.utexas.edu/pub/novak/units/ . It is written in Common Lisp. An on-line
demonstration of the software, which requires a workstation running X windows, is available
on the World Wide Web via http://www.cs.utexas.edu/users/novak .
Acknowledgment
I thank the anonymous reviewers for their suggestions for improving this paper.
--R
Handbook of Mathematical Functions
Physical Measurements and the International (SI) System of Units
Compilers: Principles
American National Standard for Metric Practice
IEEE Standard C/ATLAS
Dimensional Analysis
"Cliche-Based Modeling for Expert Problem-Solving Systems"
"A Package for Handling Units of Measure in Lisp"
"An Ontology for Engineering Mathematics"
"Units of Measure as a Data Attribute"
"A Value Transmission Method for Abstract Data Types"
"An Ada Package for Dimensional Analysis"
Conversion Tables of Units in Science and Engineering
Quantities and Units
"Incorporation of Units into Programming Languages"
"IDL: Sharing Intermediate Representations"
"Enabling Technology for Knowledge Sharing"
"The International System of Units (SI)"
"GLISP: A LISP-Based Programming System With Data Abstraction"
"Negotiated Interfaces for Software Reuse"
"Software Reuse through View Type Clusters"
"Software Reuse by Specialization of Generic Procedures through Views"
"Generating Programs from Connections of Physical Models"
Fundamental Measures and Constants for Science and Technology
"Writing Applications for Uniform Operation on a Mainframe or PC: A Metric Conversion Program"
Conversion Tables for SI Metrication
A Metrication Handbook for Engineers
--TR
--CTR
Sharon L. Greene , Tracy Lou , Paul Matchen, Dynamic dimensional feedback: an interface aid to business rule creation, CHI '05 extended abstracts on Human factors in computing systems, April 02-07, 2005, Portland, OR, USA
Tarun Rathnam , Christiaan J. J. Paredis, Developing federation object models using ontologies, Proceedings of the 36th conference on Winter simulation, December 05-08, 2004, Washington, D.C.
Raya Khanin, Dimensional analysis in computer algebra, Proceedings of the 2001 international symposium on Symbolic and algebraic computation, p.201-208, July 2001, London, Ontario, Canada
Tudor Antoniu , Paul A. Steckler , Shriram Krishnamurthi , Erich Neuwirth , Matthias Felleisen, Validating the Unit Correctness of Spreadsheet Programs, Proceedings of the 26th International Conference on Software Engineering, p.439-448, May 23-28, 2004
Jie Liu , Elaine Cheong , Feng Zhao, Semantics-based optimization across uncoordinated tasks in networked embedded systems, Proceedings of the 5th ACM international conference on Embedded software, September 18-22, 2005, Jersey City, NJ, USA
Jiang , Zhendong Su, Osprey: a practical type system for validating dimensional unit correctness of C programs, Proceeding of the 28th international conference on Software engineering, May 20-28, 2006, Shanghai, China
Gordon S. Novak Jr., Creation of Views for Reuse of Software with Different Data Representations, IEEE Transactions on Software Engineering, v.21 n.12, p.993-1005, December 1995
Gordon S. Novak, Jr., Software Reuse by Specialization of Generic Procedures through Views, IEEE Transactions on Software Engineering, v.23 n.7, p.401-417, July 1997 | dimensional analysis;unit of measurement;data type;unit conversion |
631185 | TLA in Pictures. | Predicate-action diagrams, which are similar to standard state-transition diagrams, are precisely defined as formulas of TLA (the Temporal Logic of Actions). We explain how these diagrams can be used to describe aspects of a specification - and those descriptions then proved correct - even when the complete specification cannot be written as a diagram. We also use the diagrams to illustrate proofs. | Introduction
Pictures aid understanding. A simple flowchart is easier
to understand than the equivalent programming-language
text. However, complex pictures are confusing. A large,
spaghetti-like flowchart is harder to understand than a
properly structured program text.
Pictures are inadequate for specifying complex systems,
but they can help us understand particular aspects of a
system. For a picture to provide more than an informal
comment, there must be a formal connection between the
complete specification and the picture. The assertion that
the picture is a correct description of (some aspect of) the
system must be a precise mathematical statement.
We use TLA (the Temporal Logic of Actions) to specify
systems. In TLA, a specification is a logical formula describing
all possible correct behaviors of the system. As an
aid to understanding TLA formulas, we introduce here a
type of picture called a predicate-action diagram. These diagrams
are similar to the various kinds of state-transition
diagrams that have been used for years to describe sys-
tems, starting with Mealy and Moore machines [1], [2]. We
relate these pictures to TLA specifications by interpreting
a predicate-action diagram as a TLA formula. A diagram
denoting formula D is a correct description of a system
with specification S iff (if and only if) S implies D. We
therefore provide a precise statement of what it means for
a diagram to describe a specification.
We use predicate-action diagrams in three ways that we
believe are new for a precisely defined formal notation:
• To describe aspects of a specification even when it is
not feasible to write the complete specification as a
diagram.
• To draw different diagrams that provide complementary
views of the same system.
• To illustrate formal correctness proofs.
Section II is a brief review of TLA; a more leisurely introduction
to TLA appears in [3]. Section III describes
predicate-action diagrams, using an n-input Muller C-element
as an example. It shows how diagrams are used to
describe aspects of a complete specification, and to provide
complementary views of a system. Section IV gives another
example of how predicate-action diagrams are used to describe
a system, and shows how they are used to illustrate
a proof.
II. TLA
We now describe the syntax and semantics of TLA. The
description is illustrated with the formulas defined in Figure
1. (The symbol \Delta
means equals by definition.)
We assume an infinite set of variables (such as x and
y) and a class of semantic values. Our variables are the
flexible variables of temporal logic, which are analogous to
variables in a programming language. TLA also includes
the rigid variables of predicate logic, which are analogous
to constant parameters of a program, but we ignore them
here. The class of values includes numbers, strings, sets,
and functions.
A state is an assignment of values to variables. A behavior
is an infinite sequence of states. Semantically, a TLA
formula is true or false of a behavior. Syntactically, TLA
formulas are built up from state functions using Boolean
operators (¬, ∧, ∨, ⇒ [implication], and ≡ [equivalence])
and the operators ′ and □, as described below. TLA also
has a hiding operator ∃∃, which we do not use here.
A state function is a nonBoolean expression built from
variables, constants, and constant operators. Semantically,
it assigns a value to each state; for example, x + 1 assigns
to state s one plus the value that s assigns to the variable
x. A state predicate (often called just a predicate) is
a Boolean expression built from variables, constants, and
constant operators such as +. Semantically, it is true or
false for a state; for example, the predicate Init_Φ is true of
state s iff s assigns the value zero to both x and y.
An action is a Boolean expression containing primed and
unprimed variables. Semantically, an action is true or false
of a pair of states, with primed variables referring to the
second state; for example, action M1 is true for ⟨s, t⟩ iff
the value that state t assigns to x equals one plus the value
that state s assigns to x, and the values assigned to y by
states s and t are equal. A pair of states satisfying an
action A is called an A step. Thus, an M1 step is one that
increments x by one and leaves y unchanged.
If f is a state function or state predicate, we write f′
for the expression obtained by priming all the variables of
f. For example, Init_Φ′ equals (x′ = 0) ∧ (y′ = 0).

[Fig. 1. The TLA formula Φ describing a simple program that repeatedly increments x or y.]

For an action A and a state function v,
we define [A]_v to equal A ∨ (v′ = v), so a [A]_v step is either
an A step or a step that leaves the value of v unchanged.
Thus, a [M1]_⟨x,y⟩ step is one that increments x by one and
leaves y unchanged, or else leaves the ordered pair ⟨x, y⟩
unchanged. Since a tuple is unchanged iff each component
is unchanged, a [M1]_⟨x,y⟩ step is one that increments x by
one and leaves y unchanged, or else leaves both x and y
unchanged. We define ⟨A⟩_v to equal A ∧ (v′ ≠ v), so an
⟨A⟩_v step is an A step that changes the value of v. Thus, an
⟨M1⟩_⟨x,y⟩ step is an M1 step that changes x or y. Since an
M1 step leaves y unchanged, an ⟨M1⟩_⟨x,y⟩ step is a step
that increments x by 1, changes the value of x, and leaves
y unchanged.
We say that an action A is enabled in state s iff there
exists a state t such that ⟨s, t⟩ is an A step. For example,
⟨M1⟩_⟨x,y⟩ is enabled iff it is possible to take a step that increments
x by one, changes x, and leaves y unchanged. Since
x + 1 ≠ x for any natural number x, action ⟨M1⟩_⟨x,y⟩ is
enabled in any state in which x is a natural number. If
x + 1 equals x for some value of x that is not a natural
number, then ⟨M1⟩_⟨x,y⟩ is not enabled in a state
in which x equals that value.
A TLA formula is true or false of a behavior. A predicate
is true of a behavior iff it is true of the first state. An action
is true of a behavior iff it is true of the first pair of states.
As usual in temporal logic, if F is a formula then □F is
the formula meaning that F is always true. Thus, □Init_Φ
is true of a behavior iff x and y equal zero for every state in
the behavior. The formula □[M]_⟨x,y⟩ is true of a behavior
iff each step (pair of successive states) of the behavior is a
[M]_⟨x,y⟩ step.
Using □ and "enabled" predicates, we can define fairness
operators WF and SF. The weak fairness formula WF_v(A)
asserts of a behavior that there are infinitely many ⟨A⟩_v
steps, or there are infinitely many states in which ⟨A⟩_v is
not enabled. In other words, WF_v(A) asserts that if ⟨A⟩_v
becomes enabled forever, then infinitely many ⟨A⟩_v steps
occur. The strong fairness formula SF_v(A) asserts that either
there are infinitely many ⟨A⟩_v steps, or there are only
finitely many states in which ⟨A⟩_v is enabled. In other
words, SF_v(A) asserts that if ⟨A⟩_v is enabled infinitely often,
then infinitely many ⟨A⟩_v steps occur.
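Written with the ENABLED operator, the standard TLA definitions of these two operators (they are not spelled out in the prose above) are:

   WF_v(A)  ≜  □◇⟨A⟩_v ∨ □◇¬ENABLED ⟨A⟩_v
   SF_v(A)  ≜  □◇⟨A⟩_v ∨ ◇□¬ENABLED ⟨A⟩_v

Here ◇F, an abbreviation for ¬□¬F, asserts that F eventually holds; thus □◇F says that F holds infinitely often, and ◇□F says that F eventually holds forever.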
The standard form of a TLA specification is Init ∧ □[N]_v ∧ L,
where Init is a predicate, N is an action, v
is a state function, and L is a conjunction of fairness con-
ditions. This formula asserts of a behavior that (i) Init is
true for the initial state, (ii) every step of the behavior is an
N step or leaves v unchanged, and (iii) L holds. Formula
Φ of Figure 1 is in this form, asserting that (i) initially x
and y both equal zero, (ii) every step either increments x
by one and leaves y unchanged, increments y by one and
leaves x unchanged, or leaves both x and y unchanged, and
(iii) the fairness condition WF_⟨x,y⟩(M1) ∧ WF_⟨x,y⟩(M2)
holds. Formula WF_⟨x,y⟩(M1) asserts that there are infinitely
many ⟨M1⟩_⟨x,y⟩ steps or ⟨M1⟩_⟨x,y⟩ is infinitely
often not enabled. Since (i) and (ii) imply that x is
always a natural number, ⟨M1⟩_⟨x,y⟩ is always enabled.
Hence, WF_⟨x,y⟩(M1) implies that there are infinitely many
⟨M1⟩_⟨x,y⟩ steps, so x is incremented infinitely often. Similarly,
WF_⟨x,y⟩(M2) implies that y is incremented infinitely often.
[Fig. 2. A Muller C-element.]

Putting this all together, we see that Φ is true
of a behavior iff (i) x and y are initially zero, (ii) every
step increments either x or y by one and leaves the other
unchanged or else leaves both x and y unchanged, and
(iii) both x and y are incremented infinitely many times.
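Spelled out, a formula with exactly this meaning (a reconstruction consistent with the description of Figure 1 given above) is:

   Init_Φ  ≜  (x = 0) ∧ (y = 0)
   M1      ≜  (x′ = x + 1) ∧ (y′ = y)
   M2      ≜  (y′ = y + 1) ∧ (x′ = x)
   M       ≜  M1 ∨ M2
   Φ       ≜  Init_Φ ∧ □[M]_⟨x,y⟩ ∧ WF_⟨x,y⟩(M1) ∧ WF_⟨x,y⟩(M2)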
The formula Init ∧ □[N]_v is a safety property [4]. It
describes what steps are allowed, but it does not require
anything to happen. (The formula is satisfied by a behavior
satisfying the initial condition in which no variables
ever change.) Fairness conditions are used to specify that
something must happen.
III. Predicate-Action Diagrams
A. An Example
We take as an example a Muller C-element [5]. This is a
circuit with n binary inputs in[1], ..., in[n], and one binary
output out, as shown in Figure 2. As the figure indicates,
we are considering the closed system consisting of the C-element
together with its environment. Initially, all the
inputs and the output are equal. The output becomes 0
when all the inputs are 0, and it becomes 1 when all the
inputs are 1. After an input changes, it must remain stable
until the output changes.
The behavior of a 2-input C-element and its environment
is described by the predicate-action diagram of Figure 3(a),
where C is defined by

   C(i, j, o)  ≜  (in[1] = i) ∧ (in[2] = j) ∧ (out = o)

The short arrows, with no originating node, identify the
nodes labeled C(0, 0, 0) and C(1, 1, 1) as initial nodes.
They indicate that the C-element starts in a state satisfying
C(0, 0, 0) or C(1, 1, 1). The arrows connecting nodes
indicate possible state transitions. For example, from a
state satisfying C(1, 1, 1), it is possible for the system
to go to a state satisfying either C(0, 1, 1) or C(1, 0, 1).
More precisely, these arrows indicate all steps in which
the triple ⟨in[1], in[2], out⟩ changes, that is, transitions in
which at least one of in[1], in[2], and out changes. Steps
that change other variables (for example, variables representing
circuit elements inside the environment) but leave
⟨in[1], in[2], out⟩ unchanged are also possible.
The predicate-action diagram of Figure 3(a) looks like a
standard state-transition diagram. However, we interpret
it formally not as a conventional state machine, but as the
[Fig. 3. Predicate-action diagram of ⟨in[1], in[2], out⟩ for a 2-input C-element, and the corresponding TLA formula: (a) a predicate-action diagram; (b) the corresponding TLA formula.]
TLA formula of Figure 3(b).¹ This formula has the form
Init ∧ ∧_o F_o, where Init is a state predicate and there is
one conjunct F_o for each node o. The predicate Init is
C(0, 0, 0) ∨ C(1, 1, 1). Each F_o describes the possible state
changes starting from a state described by node o. For
example, the formula F_o for the node labeled C(1, 1, 0) is

   □[C(1, 1, 0) ⇒ C(1, 1, 1)′]_⟨in[1],in[2],out⟩
A predicate-action diagram represents a safety property; it
does not include any fairness conditions.
Figure 3(a) is a reasonable way to describe a 2-input
C-element. However, the corresponding diagram for a 3-input
C-element would be quite complicated; and there is
no way to draw such a diagram for an n-input circuit. The
general specification is written directly as a TLA formula
in Figure 4. The array of inputs is represented formally
by a variable in whose value is a function with domain
{1, ..., n}; square brackets denote function application.
(Formally, n is a rigid variable, one whose value is
constant throughout a behavior.) We introduce two pieces
of notation for representing functions:
- [i ∈ S ↦ e(i)] denotes the function f with domain S
  such that f[i] equals e(i) for every i in S.
- [f EXCEPT ![i] = e] denotes the function g that is the
  same as f except that g[i] equals e.
The formulas defined in Figure 4 have the following interpretation
Init_C A state predicate asserting that out is either 0 or
1, and that in is the function with domain {1, ..., n}
such that in[i] equals out for all i in its domain.
Input(i) An action that is enabled iff in[i] equals out.
It complements in[i], leaves in[j] unchanged for j ≠ i,
and leaves out unchanged. (The symbol i is a parameter.)
Output An action that is enabled iff all the in[i] are different
from out . It complements out and leaves in
unchanged.
¹A list of formulas bulleted by ∧ or ∨ denotes their conjunction or
disjunction; ∧ and ∨ are also used as ordinary infix operators.

[Fig. 4. A TLA specification of an n-input C-element: the definitions of Init_C, Input(i), Output, Next, and Π_C.]
Next An action that is the disjunction of Output and all
the Input(i) actions, for i in {1, ..., n}. Thus, a Next
step is either an Output step or an Input(i) step for
some input line i.
Π_C A temporal formula that is the specification of the
C-element (together with its environment). It asserts
that (i) Init_C holds initially, (ii) every step is either
a Next step or else leaves ⟨in, out⟩ unchanged, and
(iii) Output cannot be enabled forever without an Output
step occurring. The fairness condition (iii) requires
the output to change if all the inputs have; inputs
are not required to change. (Since predicate-action
diagrams describe only safety properties, the fairness
condition is irrelevant to our explanation of the diagrams.)
The specification \Pi C is short and precise. However, it is
not as reader-friendly as a predicate-action diagram. We
therefore use diagrams to help explain the specification,
beginning with the predicate-action diagram of Figure 5.
It is a diagram of the state function ⟨in[i], out⟩, meaning
that it describes transitions that change ⟨in[i], out⟩. It is
a diagram for the formula Π_C, meaning that it represents
a formula that is implied by Π_C. The diagram shows the
synchronization between the C-element's ith input and its
output.
[Fig. 5. A predicate-action diagram of ⟨in[i], out⟩ for the specification Π_C of an n-input C-element, where 1 ≤ i ≤ n.]

[Fig. 6. Another predicate-action diagram of ⟨in[i], out⟩ for Π_C, where 1 ≤ i ≤ n.]

We can draw many different predicate-action diagrams
for the same specification. Figure 6 shows another diagram
of ⟨in[i], out⟩ for Π_C. It is simpler than the one in Figure 5,
but it contains less information. It does not indicate that
the values of in[i] and out are always 0 or 1, and it does
not show which variable is changed by each transition. The
latter information is added in the diagram of Figure 7(a),
where each transition is labeled with an action. The label
Input(i) on the left-to-right arrow indicates that a transition
from a state satisfying in[i] = out to a state satisfying
in[i] ≠ out is an Input(i) step. This diagram represents
the TLA formula of Figure 7(b).
Even more information is conveyed by a predicate-action
diagram of ⟨in, out⟩, which also shows transitions that leave
in[i] and out unchanged but change in[j] for some j ≠ i.
Such a diagram is drawn in Figure 8(a). Figure 8(b) gives
the corresponding TLA formula.
There are innumerable predicate-action diagrams that
can be drawn for a specification. Figure 9 shows yet another
diagram for the C-element specification \Pi C . Since
we are not relying on these diagrams as our specification,
but simply to help explain the specification, we can show as
much or as little information in them as we wish.

[Fig. 7. A more informative predicate-action diagram of ⟨in[i], out⟩ for Π_C, and the corresponding TLA formula: (a) a predicate-action diagram of ⟨in[i], out⟩; (b) the corresponding TLA formula.]

[Fig. 8. A predicate-action diagram of ⟨in, out⟩ for Π_C, and the corresponding TLA formula, where 1 ≤ i ≤ n: (a) a predicate-action diagram of ⟨in, out⟩; (b) the corresponding TLA formula.]

[Fig. 9. Yet another predicate-action diagram of ⟨in, out⟩ for Π_C.]

We can
draw multiple diagrams to illustrate different aspects of a
system. Actual specifications are written as TLA formulas,
which are much more expressive than pictures.
B. A Formal Treatment
B.1 Definition
We first define precisely the TLA formula represented by
a diagram. Formally, a predicate-action diagram consists
of a directed graph, with a subset of the nodes identified as
initial nodes, where each node is labeled by a state predicate
and each edge is labeled by an action. We assume
a given diagram of a state function v and introduce the
following notation.
N      The set of nodes.
I      The set of initial nodes.
E(n)   The set of edges originating at node n.
d(e)   The destination node of edge e.
P_n    The predicate labeling node n.
E_e    The action labeling edge e.
The formula Δ represented by the diagram is defined as
follows.

   Init_Δ  ≜  ∨_{n ∈ I} P_n
   A_n     ≜  ∨_{e ∈ E(n)} (E_e ∧ P_{d(e)}′)
   Δ       ≜  Init_Δ ∧ ∧_{n ∈ N} □[P_n ⇒ A_n]_v

When no explicit label is attached to an edge e, we take
E_e to be true. When no set of initial nodes is explicitly
indicated, we take I to be N. With the usual convention
for quantification over an empty set, A_n is defined to equal
false if there are no edges originating at node n.
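As a concrete illustration of this definition, the following Common Lisp sketch (not part of the paper) builds Δ as a symbolic expression from a diagram; the list representation of diagrams and the operator symbols ALWAYS, BOXSUB, PRIME, and IMPLIES are assumptions of the sketch.

;; Build the S-expression for Delta from a diagram.  PREDS is an alist
;; mapping each node to its predicate P_n; INITS lists the initial nodes;
;; EDGES is a list of (from to action) triples, a missing action meaning
;; TRUE; V names the state function used as the subscript.
(defun node-action (n preds edges)
  ;; A_n: the disjunction of E_e /\ P'_d(e) over the edges e leaving node n
  (let ((out (remove-if-not (lambda (e) (eq (first e) n)) edges)))
    (if (null out)
        'false
        (cons 'or
              (mapcar (lambda (e)
                        (list 'and
                              (or (third e) 'true)
                              (list 'prime (cdr (assoc (second e) preds)))))
                      out)))))

(defun diagram-formula (preds inits edges v)
  ;; Delta = Init_Delta, conjoined with [](P_n => A_n)_v for every node n
  (list 'and
        (cons 'or (mapcar (lambda (n) (cdr (assoc n preds))) inits))
        (cons 'and
              (mapcar (lambda (entry)
                        (list 'always
                              (list 'boxsub
                                    (list 'implies
                                          (cdr entry)
                                          (node-action (car entry) preds edges))
                                    v)))
                      preds))))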
B.2 Another Interpretation
Another possible interpretation of the predicate-action
diagram is the formula Δ̂, defined by

   Δ̂  ≜  Init_Δ ∧ □[∨_{n ∈ N} (P_n ∧ A_n)]_v

This is perhaps a more obvious interpretation, especially
if the diagram is viewed as a description of a next-state
relation. We now show that Δ always implies Δ̂, and that
the converse implication holds if the predicates labeling the
nodes are disjoint.
(A) Δ implies Δ̂.
A simple invariance proof, using rule INV1 of [3,
Figure 5, page 888], shows that Δ implies □(∃ n ∈ N : P_n).
The implication Δ ⇒ Δ̂ then follows by a short string of
equivalences and one implication: □ and [·]_v distribute over
conjunction and ∀, and, by predicate logic, B ⇒ C implies
□[B]_v ⇒ □[C]_v.
(B) If P_m ∧ P_n ≡ false holds for all m, n in N with m ≠ n,
then Δ̂ implies Δ.
By propositional logic, the hypothesis implies that
∨_{n ∈ N} (P_n ∧ A_n) implies ∧_{n ∈ N} (P_n ⇒ A_n).
The result then follows from simple temporal reasoning,
essentially by the reverse of the string of equivalences and
implication used to prove (A).
We usually label the nodes of a predicate-action diagram
with disjoint predicates, in which case (A) and (B) imply
that the interpretations Δ and Δ̂ are equivalent. Diagrams
with nondisjoint node labels may occasionally be useful; \Delta
is the more convenient interpretation of such diagrams.
C. Proving a Predicate-Action Diagram
Saying that a diagram is a predicate-action diagram for
a specification Π asserts that Π implies the formula Δ represented
by the diagram. Formula Π will usually have the
form Init_Π ∧ □[M]_u ∧ L, where L is a fairness condition.
To prove Π ⇒ Δ, we prove:
1. Init_Π ⇒ Init_Δ
2. Init_Π ∧ □[M]_u ⇒ □[P_n ⇒ A_n]_v, for each node n.
The first condition is an assertion about predicates; it is
generally easy to prove. To prove the second condition,
one usually finds an invariant Inv such that Init_Π ∧ □[M]_u
implies □Inv, so Π implies □[M ∧ Inv]_u. The second condition
is then proved by showing that [M ∧ Inv]_u implies
[P_n ⇒ A_n]_v for each node n. Usually, u and v are tuples
and every component of v is a component of u, so (u′ = u)
implies (v′ = v). In this case, one need show only
that M ∧ Inv implies [P_n ⇒ A_n]_v for each n. By definition
of A_n, this means proving

   P_n ∧ Inv ∧ M ∧ (v′ ≠ v)  ⇒  ∨_{m ∈ E(n)} (E_m ∧ P_{d(m)}′)

for each node n. This formula asserts that an M step
that starts with P_n and Inv true and changes v is an E_m
step that ends in a state satisfying P_{d(m)}, for some edge m
originating at node n.
IV. Illustrating Proofs
In TLA, there is no distinction between a specification
and a property; they are both formulas. Verification means
proving that one formula implies another. A practical, relatively
complete set of rules for proving such implications
is described in [3]. We show here how predicate-action diagrams
can be used to illustrate these proofs. We take as our
example the same one treated in [3], that the specification
\Psi defined in Section IV-A below implies the specification
\Phi defined in Section II above.
A. Another Specification
We define a TLA formula \Psi describing a program with
two processes, each of which repeatedly loops through the
sequence of operations P (sem); increment ; V (sem), where
one process increments x by one and the other increments
y by one. Here, P (sem) and V (sem) denote the usual operations
on a semaphore sem. To describe this program
formally, we introduce a variable pc that indicates the control
state. Each process has three control points, which we
call "a", "b", and "g". (Quotes indicate string values.)
We motivate the definition of Ψ with the three predicate-action
diagrams for Ψ in Figure 10. In these diagrams, the
predicate PC(p, q) asserts that control is at p in process 1
and at q in process 2. Figure 10(a) shows how the control
state changes when the P(sem), V(sem), and increment
actions are performed. Variables other than pc not mentioned
in an edge label are left unchanged by the indicated
steps (for example, steps described by the edge labeled
x′ = x + 1 leave y and sem unchanged), but this is not
asserted by the diagram. The next-state action N is written
as the disjunction N1 ∨ N2 of the next-state actions
of each process; and each N_i is written as the disjunction
α_i ∨ β_i ∨ γ_i. Figure 10(b) illustrates this decomposition.
Finally, the predicate-action diagram of Figure 10(c) describes
how the semaphore variable sem changes.
To write the specification Ψ, we let pc be a function with
domain {1, 2}, with pc[i] indicating where control resides
in process i. The formula PC(p, q) can then be defined by

   PC(p, q)  ≜  (pc[1] = p) ∧ (pc[2] = q)

[Fig. 10. Three predicate-action diagrams for Ψ: (a) the control state, (b) the decomposition of each next-state action N_i, (c) the semaphore variable sem.]

The semaphore actions P and V are defined by

   P(sem)  ≜  (0 < sem) ∧ (sem′ = sem - 1)
   V(sem)  ≜  (sem′ = sem + 1)
Missing from Figure 10 are a specification of the initial values
of x and y, which we take to be zero, and a fairness con-
dition. One could augment predicate-action diagrams with
some notation for indicating fairness conditions. However,
the conditions that are easy to represent with a diagram
are not expressive enough to describe the variety of fairness
requirements that arise in practice. The WF and SF formu-
las, which are expressive enough, are not easy to represent
graphically. So, we have not attempted to represent fairness
in our diagrams. We take as the fairness condition for
our specification \Psi strong fairness on the next-state action
N_i of each process. The complete definition of Ψ appears
in Figure 11.
B. An Illustrated Proof
The proof of Ψ ⇒ Φ is broken into the proof of three
conditions:
1. Init_Ψ ⇒ Init_Φ
2. Init_Ψ ∧ □[N]_w ⇒ □[M]_⟨x,y⟩
3. Ψ ⇒ WF_⟨x,y⟩(M1) ∧ WF_⟨x,y⟩(M2)
We illustrate the proofs of conditions 2 and 3 with the
predicate-action diagram for Ψ in Figure 12, where Q is defined by

   Q_k(p, q)  ≜  PC(p, q) ∧ (x ∈ Nat) ∧ (y ∈ Nat) ∧ (sem = k)

and Nat is the set of natural numbers.
First, we must show that the diagram in Figure 12
is a predicate-action diagram for \Psi. This is easy using
the definition in Section III-B.1; no invariant is needed.
For example, the condition to be proved for the node labeled
Q_0("b", "a") is that an N step that starts with
Q_0("b", "a") true is an M1 step (one that increments x and
leaves y unchanged) that makes Q_0("g", "a") true. This
follows easily from the definitions of Q and N, since an N
step starting with PC("b", "a") true must be a β1 step.
To prove condition 2, it suffices to prove that every step
allowed by the diagram of Figure 12 is a [M]_⟨x,y⟩
step. The steps not shown explicitly by the diagram are
ones that leave w unchanged. Such steps leave ⟨x, y⟩
unchanged, so they are [M]_⟨x,y⟩ steps. The actions labeling
all the edges of the diagram imply [M]_⟨x,y⟩, so all the steps
shown explicitly by the diagram are also [M]_⟨x,y⟩ steps.
This proves condition 2.

[Fig. 11. The specification Ψ.]

[Fig. 12. Another predicate-action diagram for Ψ.]
We now sketch the proof of condition 3. To prove
Ψ ⇒ WF_⟨x,y⟩(M1), it suffices to show that infinitely many
⟨M1⟩_⟨x,y⟩ steps occur. We first observe that each of the
predicates labeling a node in the diagram implies that either
⟨N1⟩_w or ⟨N2⟩_w is enabled. The fairness condition
of Ψ then implies that a behavior cannot remain forever
at any node, but must keep moving through the diagram.
Hence, the behavior must infinitely often pass through
the Q_1("a", "a") node. The predicate Q_1("a", "a") implies
that both ⟨N1⟩_w and ⟨N2⟩_w are enabled. Hence, the fairness
condition SF_w(N1) ∧ SF_w(N2) implies that infinitely
many ⟨N1⟩_w steps and infinitely many ⟨N2⟩_w steps must
occur. Action ⟨N1⟩_w is enabled only in the three nodes of
the top loop. Taking infinitely many ⟨N1⟩_w steps is therefore
possible only by going around the top loop infinitely
many times, which implies that infinitely many M1 steps
occur, each starting in a state with Q_0("b", "a") true. Since
Q_0("b", "a") implies that x is in Nat, an M1 step starting with
Q_0("b", "a") true changes x, so it is an ⟨M1⟩_⟨x,y⟩ step.
Hence, infinitely many ⟨M1⟩_⟨x,y⟩ steps occur. Similarly,
taking infinitely many ⟨N2⟩_w steps implies that infinitely
many ⟨M2⟩_⟨x,y⟩ steps occur. This completes the proof of
condition 3.
Using the predicate-action diagram does not simplify the
proof. If we were to make the argument given above rigor-
ous, we would go through precisely the same steps as in the
proof described in [3]. However, the diagram does allow us
to visualize the proof, which can help us to understand it.
V. Conclusion
We have described three uses of diagrams that we believe
are new for diagrams with a precise formal semantics:
- To describe particular aspects of a complex specification
  with a simple diagram. An n-input C-element
  cannot be specified with a simple picture. However,
  we explained the specification with diagrams describing
  the synchronization between the output and each
  individual input.
- To provide complementary views of the same system.
  Diagrams (b) and (c) of Figure 10 look quite different,
  but they are diagrams for the same specification.
- To illustrate proofs. The disjunction of the predicates
  labeling the nodes in Figure 12 equals the invariant I
  of the proof in Section 7.2 of [3]. The diagram provides
  a graphical representation of the invariance proof.
TLA differs from traditional specification methods in two
important ways. First, all TLA specifications are interpreted
over the same set of states. Instead of assigning
values just to the variables that appear in the specifica-
tion, a state assigns values to all of the infinite number
of variables that can appear in any specification. Second,
TLA specifications are invariant under stuttering. A formula
can neither require nor rule out finite sequences of
steps that do not change any variables mentioned in the
formula. (The state-function subscripts in TLA formulas
are there to guarantee invariance under stuttering.)
These two differences lead to two major differences between
traditional state-transition diagrams and predicate-
action diagrams. In traditional diagrams, each node represents
a single state. Because states in TLA assign values to
an infinite number of variables, it is impossible to describe
a single state with a formula. Any formula can specify
the values of only a finite number of variables. To draw
diagrams of TLA formulas, we let each node represent a
predicate, which describes a set of states. In traditional di-
agrams, every possible state change is indicated by an edge.
Because TLA formulas are invariant under stuttering, we
draw diagrams of particular state functions-usually tuples
of variables.
TLA differs from most specification methods because it
is a logic. It uses simple logical operations like implication
and conjunction instead of more complicated automata-based
notions of simulation and composition [6]. Everything
we have done with predicate-action diagrams can be
done with state-transition diagrams in any purely state-based
formalism. However, conventional formalisms must
use some notion of homomorphism between diagrams to
describe what is expressed in TLA as logical implication.
Most formalisms employing state-transition diagrams are
not purely state-based, but use both states and events.
Nodes represent states, and edges describe input and output
events. The meaning of a diagram is the sequence of
events it allows; the states are effectively hidden. In TLA,
there are only states, not events. Systems are described in
terms of changes to interface variables rather than in terms
of interface events. Variables describing the internal state
are hidden with the existential quantifier ∃∃ described in [3].
Changes to any variable, whether internal or interface, can
be indicated by node labels or edge labels. Hence, a purely
state-based approach like TLA allows more flexibility in
how diagrams are drawn than a method based on states
and events.
--R
"A method for synthesizing sequential circuits,"
"Gedanken-experiments on sequential machines,"
"The temporal logic of actions,"
"Defining liveness,"
Introduction to VLSI Systems
"Conjoining specifications,"
--TR
--CTR
M. Lusini , E. Vicario, Engineering the usability of visual formalisms: a case study in real time logics, Proceedings of the working conference on Advanced visual interfaces, May 24-27, 1998, L'Aquila, Italy
Harold Thimbleby, User interface design with matrix algebra, ACM Transactions on Computer-Human Interaction (TOCHI), v.11 n.2, p.181-236, June 2004
Jules Desharnais , Marc Frappier , Ridha Khdri , Ali Mili, Integration of sequential scenarios, ACM SIGSOFT Software Engineering Notes, v.22 n.6, p.310-326, Nov. 1997
Jules Desharnais , Marc Frappier , Ridha Khdri , Ali Mili, Integration of Sequential Scenarios, IEEE Transactions on Software Engineering, v.24 n.9, p.695-708, September 1998
L. E. Moser , Y. S. Ramakrishna , G. Kutty , P. M. Melliar-Smith , L. K. Dillon, A graphical environment for the design of concurrent real-time systems, ACM Transactions on Software Engineering and Methodology (TOSEM), v.6 n.1, p.31-79, Jan. 1997 | specification;concurrency;state-transition diagrams;temporal logic |
631204 | Creation of Views for Reuse of Software with Different Data Representations. | Software reuse is inhibited by the many different ways in which equivalent data can be represented. We describe methods by which views can be constructed semi-automatically to describe how application data types correspond to the abstract types that are used in numerical generic algorithms. Given such views, specialized versions of the generic algorithms that operate directly on the application data can be produced by compilation. This enables reuse of the generic algorithms for an application with minimal effort. Graphical user interfaces allow views to be specified easily and rapidly. Algorithms are presented for deriving, by symbolic algebra, equations that relate the variables used in the application data to the variables needed for the generic algorithms. Arbitrary application data structures are allowed. Units of measurement are converted as needed. These techniques allow reuse of a single version of a generic algorithm for a variety of possible data representations and programming languages. These techniques can also be applied in data conversion and in object-oriented, functional, and transformational programming. | Introduction
An algorithm should be like a mathematical theorem in the sense that once the algorithm
has been developed, it should be reusable and should never have to be recoded manually.
Like other engineering artifacts, however, algorithms that are used in an application must
be adapted to fit the other parts of the application. Almost everyone is willing to reuse
the standard algorithm for sqrt, but algorithms such as testing whether a point is inside a
polygon are less likely to be reused. The argument and result types of sqrt match application
types; however, there are many possible polygon representations, so an application is unlikely
to use the same data types as a library program.
Strong typing is an important optimization: by checking types statically, a compiler
can avoid generation of runtime type-checking code, thus providing type safety while saving
time and storage. Unfortunately, rigidity of types inhibits reuse. Traditional ways of making
application data match a procedure that is to be reused are costly and discourage reuse [24].
An effective method of reuse must minimize two costs:
1. Human cost: the time required by the programmer to find the program to be reused,
to understand its documentation, and to adapt the reused program and/or application
so that they fit.
2. Computational cost: the added cost of running a reused program, compared to a
hand-written version.
This paper describes methods for reuse using views that describe how application data
types correspond to the abstract types used in generic algorithms. Algorithms based on
algebra create the views from correspondences that are specified via an easily used
graphical interface. Given the views, a compiler produces efficient specialized versions of
generic algorithms that operate directly on the application data. A single copy of a generic
algorithm can be specialized for a variety of application data representations and for a
variety of programming languages. The efficiency, encapsulation, and type safety of strong
type checking are retained, while the barriers to reuse caused by type rigidity are eliminated.
The techniques described here are an enabling technology that makes software reuse easy
and practical.
The graphical interface is illustrated in Fig. 1. In this example, we assume that the
user has existing data that describes a Christmas tree; the user would like to reuse a small
program, which calculates the area of the side of a cone, on this data. A view of the type
xmas-tree as a cone is made using the program mkv ("make view"):
(mkv 'cone 'xmas-tree)
Figure 1: Viewing a Christmas Tree as a Cone
The view is specified by correspondences between the application data type and the abstract
representation of a cone. The user first selects a "button" on the cone diagram, then selects
a corresponding item within the menu of fields of the xmas-tree type. The system draws
lines between the selected items to show the correspondences. Using these correspondences
together with equations associated with a cone, the system constructs a view of a xmas-tree
as a cone. Any generic procedure associated with cone can then be specialized and used for
the xmas-tree data:
(gldefun t1 (tree:xmas-tree) (side-area (cone tree)))
This function, written in a Lisp-like functional form, requests the side-area of the cone
view of the tree. The function is compiled into Lisp code that computes the side-area
directly from the application data structure:
(LAMBDA (TREE)
  (* 3.1415926535897931
     (BASE-RADIUS TREE)                     ; remainder of this listing is a
     (SQRT (+ (EXPT (BASE-RADIUS TREE) 2)   ; reconstruction; it assumes the
              (EXPT (HEIGHT TREE) 2)))))    ; fields base-radius and height
Mechanical translation can produce code for applications in other languages, such as C:
float t1c (tree)
  { return 3.1415926535897931
           * tree->base-radius
           * sqrt(square(tree->base-radius)
                  + square(tree->height));  /* final term reconstructed,
                                               assuming a height field */
  }
The user obtains this specialized code without having to understand the algorithm and
without having to understand the implementation of the cone abstract data type. The user
only needs to select the items in the diagram that correspond to the application data.
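The paper does not list the equations of the cone abstract type, but a hypothetical set written in the same style as the circle equations shown later in Fig. 3 would explain the compiled code above, since the side area of a cone is pi * r * sqrt(r^2 + h^2); the names base-radius, height, and slant-height are assumptions.

;; Hypothetical equations for a CONE abstract type (not shown in the paper),
;; in the same style as the circle equations of Fig. 3.
(setf (get 'cone 'basis-vars) '(base-radius height))
(setf (get 'cone 'equations)
      '((= slant-height (sqrt (+ (expt base-radius 2) (expt height 2))))
        (= side-area (* pi base-radius slant-height))
        (= base-area (* pi (expt base-radius 2)))
        (= volume (/ (* pi (expt base-radius 2) height) 3))))

With a view mapping base-radius and height onto the xmas-tree fields, specializing side-area through such equations yields exactly the computation shown in the compiled code above.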
In this paper, we concentrate on numerical data of the kinds used in scientific and
engineering programs. We have previously described [33] [34] [35] techniques for reuse of
generic algorithms that deal with discrete data structures such as linked lists and trees;
those methods, together with the methods described in this paper, allow reuse of algorithms
that involve both discrete data structures and numerical data. The generic algorithms that
can be specialized range from simple formulas, such as the formula for the area of a circle,
to larger procedures such as testing whether a point is inside a polygon.
Section 2 gives a formal definition of views. Section 3 presents the algorithms that
construct views from correspondences using algebraic manipulation of equations. Section 4
describes the graphical user interface for specifying views. Section 5 describes several ways in
which views enable software reuse: by specialization of generic procedures, by translation of
data to a desired form, and in object-oriented, functional, or transformational programming.
Section 6 surveys related work, and a Section 7 presents conclusions. Finally, an on-line
demonstration of the program is described.
2 Representation and Views
Traditional data types combine two issues that should be separated to facilitate reuse:
1. the way in which application data are represented
2. procedures associated with a conceptual kind of object
These are termed implementation inheritance and interface inheritance in object-oriented
programming [9]. When the two are combined, the code of a procedure implicitly states
assumptions about details of data representation. This inhibits software reuse, since any
assumptions made in writing a procedure become requirements that must be met if the
procedure is to be reused. For effective reuse, assumptions must be minimized. Object-oriented
programming and functional programming partly separate the two issues, but still
make some representation assumptions and also incur performance penalties, as we discuss
later. Views make a clean separation between representation and procedures; compilation
techniques yield good performance and can produce code in multiple target languages.
There are many ways in which representations of equivalent data can differ:
1. names of individual variables,
2. data representations and units of measurement of variables,
3. data structures used to aggregate variables,
4. the set of variables chosen to represent an object, and
5. the conceptual method, or ontology [30], of a representation.
For example, a vector could be represented using Cartesian coordinates or polar coordinates.
In order to reuse a generic algorithm for application data, it must be possible either to
translate the data into the form expected by the algorithm, or to modify the algorithm to
work with the existing data; both can be done using views.
A view describes how an application data type corresponds to an abstract type. In effect,
the view encapsulates the application data type and makes it appear to be of the abstract
Fig. 2 illustrates how an application type pipe can be viewed as a circle that is
defined in terms of radius. The radius of the circle corresponds to the inside-diameter
of the pipe divided by 2. The view makes visible only the name radius, hiding the names
of pipe; procedures defined for circle can be inherited through the view. Note that it is
not the case that a pipe is a circle; rather, a circle is a useful view of a pipe. A second
view of a pipe as a circle, using outside-diameter, is also useful.
[Figure 2: View as Encapsulation of Application Data. The pipe type, with fields length, inside-diameter, outside-diameter, and material, is encapsulated by the view pipe-as-circle, which exposes only the basis variable radius.]
We formalize the notion of views as follows. An abstract type is considered to be an
abstract record containing a set of basis variables. A view encapsulates the application type,
hiding its names, and presents an external interface that consists of the basis variables of
the abstract type. The view allows the basis variables to be both "read" and "written". The
view emulates the abstract record by maintaining two properties:
1. Storage property: After a value z is "written" into basis variable v_i, a "read" of v_i will
yield the value z.
2. Independence property: If a "read" of basis variable v_i yields the value z, and a value
is then "written" into some basis variable v_j with j ≠ i, a "read" of v_i will still yield z.
These two properties express the behavior normally expected of a record: stored values can
be retrieved, and storing into one field does not change the values of other fields. If these
properties are maintained, then the view faithfully emulates a record consisting of the basis
variables, using the application type as its internal storage. The view inherits all the generic
procedures associated with the abstract type. The view thus makes the application type
appear to be a full-fledged implementation of the abstract type. Smith [45] uses the term
theory morphism for a similar notion; Gries [17] uses the term coupling invariant and cites
the term coordinate transformation used by Dijkstra [7].
The example of Fig. 2 illustrates a simple view in which a pipe is viewed as a circle
in terms of the basis variable radius. A "read" of radius is accomplished by dividing the
inside-diameter by 2; a "write" of radius is implemented by multiplying the value to be
written by 2 and storing it into the inside-diameter. Source code that accesses the radius
through the view and equivalent compiled code in C are shown below.
(gldefun t5 (p:pipe)
(radius (circle p)) )
float t5 (p)
return p-?inside-diameter / 2; -
(gldefun t6 (p:pipe r:real)
((radius (circle p)) := r)
return
Because the view makes the application data appear to be exactly like the abstract type,
any generic procedure that is defined for the abstract type will produce the same results
using the application data through the view. With optimized in-line compilation of the
code for access to data through views, the specialized versions of generic algorithms operate
directly on the application data and are efficient.
In practice, it is often necessary to relax the storage and independence properties slightly:
1. The storage and independence properties are assumed to hold despite representation
inaccuracy, e.g. floating-point round-off error. For example, if an application type that
uses polar coordinates is viewed as a Cartesian vector, the value of x that is "read"
may be slightly different from the value of x that was "written".
2. A view may be partial, i.e., may define only those basis variables that are used. For
example, to compute the area of a circle, only the radius is needed. Any use of an
undefined basis variable is detected as an error by the compiler.
3 Creation of Views from Correspondences
A view contains a set of procedures to read and write each basis variable of the abstract
type. These procedures could be written by hand. However, it is nontrivial to ensure that
the procedures are complete, consistent, efficient, and satisfy the storage and independence
properties. The procedures to view an application type as a line-segment (Fig. 5 below)
are 66 lines of code; it would be difficult to make such a view manually. This section
describes algorithms to derive the procedures for a view from correspondences, using symbolic
algebra. A later section describes the graphical user interface that makes it easy to specify
the correspondences.
3.1 Basis Variables and Equations
(setf (get 'circle 'basis-vars) '(radius))
(setf (get 'circle 'equations)
'((= diameter (* 2 radius))
(= circumference (* pi diameter))
(= area (* pi (expt radius 2)))))
Figure 3: Equations for Circle
Each abstract type defines a set of basis variables and a set of equations. The basis
variables and equations for a simple circle abstract type are shown in Fig. 3; those for
a line-segment are shown in Fig. 4. The equations are written in a fully parenthesized
Lisp notation with the operator appearing first in each subexpression. The basis variables
are specified by the designer of the abstract data type and constitute a contract between
the implementer and user of the generic procedures: the implementer can assume that
every use of the generic procedures will behave as if the set of basis variables is directly
implemented, and the user can assume that if the view of the application data emulates
a direct implementation of the basis variables, the generic procedures will work for the
application data.
A view is created from the correspondences provided by the graphical interface and from
the equations. Each correspondence is a pair, (abstract-var application-var), that
associates a variable of the abstract type with a corresponding variable of the application
type. 1 The correspondences are processed sequentially.
A view must have a procedure to read and write each basis variable, while maintaining
the storage and independence properties. The procedures are derived by symbolic algebra.
Although powerful packages such as Mathematica [54] exist, we use a simple equation solver
Our interface actually allows an algebraic expression in terms of variables or computed quantities from
the application type.
(setf (get 'line-segment 'basis-vars)
      '(p1x p1y p2x p2y))
(setf (get 'line-segment 'equations)
      '((= p1x (x p1))
        (= p1y (y p1))
        (= p2x (x p2))
        (= p2y (y p2))
        ;; equations defining the tuple variables p1 and p2 in terms of
        ;; their components (p1x p1y) and (p2x p2y) also appear in this set
        (= deltax (- p2x p1x))
        (= deltay (- p2y p1y))
        (= slope (/ deltay (float deltax)))
        (= slope (tan theta))
        (= slope (/ 1.0 (tan phi)))
        (= length (sqrt (+ (expt deltax 2)
                           (expt deltay 2))))
        (= theta (atan deltay deltax))
        (= phi (atan deltax deltay))
        (= phi (- (/ pi 2.0) theta))
        (= deltay (* length (sin theta)))
        (= deltax (* length (cos theta)))
        (= deltax (* length (sin phi)))
        (= deltay (* length (cos phi)))))
Figure 4: Equations for Line-Segment
and a relatively large (and possibly redundant) set of simple equations, as in Fig. 4. The
equations that describe abstract data types typically are simple, so that a simple equation
solver has been sufficient.
Equations are solved by algebraic manipulations. The equation solver is given a formula
and a desired variable. If the left-hand side of the formula is the desired variable, the equation
is solved. Otherwise, an attempt is made to invert the right-hand side, using algebraic
laws, to find the desired variable; for example, the equation (= x (+ y z)) is equivalent to
(= (- x y) z) or (= (- x z) y). These manipulations are performed recursively until
the desired variable is isolated. Of course, this procedure assumes that the desired variable
occurs exactly once in the equation.
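A minimal sketch of such a solver in Common Lisp (not the paper's implementation; it handles only the operators appearing in the example equations and assumes the desired variable occurs exactly once):

(defun occurs-in (var form)
  (or (eq var form)
      (and (consp form) (some (lambda (f) (occurs-in var f)) (cdr form)))))

;; Solve the equation (= LHS RHS) for VAR by repeatedly inverting the top
;; operator of RHS; returns an equation (= VAR expr), or NIL if it cannot.
(defun isolate (var lhs rhs)
  (cond ((eq lhs var) (list '= var rhs))
        ((eq rhs var) (list '= var lhs))
        ((not (consp rhs)) nil)
        (t (destructuring-bind (op a &optional b &rest more) rhs
             (cond (more nil)                       ; only unary/binary operators handled
                   ((and b (occurs-in var a))       ; invert around the left operand
                    (isolate var
                             (case op
                               (+ (list '- lhs b)) (- (list '+ lhs b))
                               (* (list '/ lhs b)) (/ (list '* lhs b))
                               (t (return-from isolate nil)))
                             a))
                   ((and b (occurs-in var b))       ; invert around the right operand
                    (isolate var
                             (case op
                               (+ (list '- lhs a)) (* (list '/ lhs a))
                               (- (list '- a lhs)) (/ (list '/ a lhs))
                               (t (return-from isolate nil)))
                             b))
                   ((occurs-in var a)               ; simple unary inverses
                    (isolate var
                             (case op
                               (sqrt (list 'expt lhs 2))
                               (tan  (list 'atan lhs))
                               (float lhs)
                               (t (return-from isolate nil)))
                             a))
                   (t nil))))))

;; Example: solving the circle equation for RADIUS
;; (isolate 'radius 'diameter '(* 2 radius))  =>  (= RADIUS (/ DIAMETER 2))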
The equations that specify a variable as a tuple of other variables are used to describe
grouping relationships. Application data might specify a point in terms of separate x and y
components, or as a data structure that represents a point as a whole (either by containing
x and y components directly or by having a view that defines x and y). The user of the
graphical interface should be able to specify either the whole representation of the point,
or one or more of its components; however, it would be incorrect to specify both the whole
and a component. The treatment of tuple equations, described below, guarantees that only
correct combinations can be specified. Corresponding to each tuple definition are equations
that define how to extract the components from the tuple.
3.2 Incremental Equation Solving
An equation set is initialized by making a copy of the equations of the target abstract
type. As each correspondence between an abstract variable and an application variable is
processed, the equation set is examined to see whether any equations can be solved. This
incremental solution of the equations accomplishes several goals:
1. It produces equations for computing variables of the abstract type in terms of stored
variables of the application type. These equations are later compiled into code.
2. It produces efficient ways of computing the variables, as described below.
3. The algorithm returns a list of all variables that are defined or computable based on
the correspondences entered thus far. The buttons for these variables are removed from
the user interface, preventing the user from entering a contradictory specification.
The following algorithm, which we call var-defined, is performed for each correspondence,
(abstract-variable application-variable).
1. The abstract variable is added to a list of defined variables and to a list of solved
variables, i.e., variables that are computable from the correspondences specified so far.
2. Each equation is examined to determine what unsolved variables it contains:
(a) If there is exactly one unsolved variable, and the equation can be solved to produce
a new equation defining that variable, then
i. The right-hand side of the new equation is symbolically optimized using a
pattern-matching optimizer.
ii. The new equation is saved for later use in generating code.
iii. The variable is added to the list of solved variables.
iv. The equation is deleted.
(b) If there are no unsolved variables, the equation is deleted. This case can occur
if the equation set contains multiple equations for computing the same variable.
Assuming that the equation set is consistent, the equation will represent a mathematical
identity among variables that have already been defined.
(c) If the equation defines a tuple variable, and some component of the tuple has
been solved, the variable is added to a list of deleted tuple variables, and the
equation is deleted.
(d) If the equation contains a deleted tuple variable, the equation is deleted. A
deleted tuple variable can never become defined.
3. If any variables were solved in step 2.a., step 2 is repeated.
4. Finally, a list of variables is returned; this list includes the newly defined variable, any
variables that were solved using equations, and any deleted tuple variables.
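A minimal Common Lisp sketch of this incremental step (not the paper's code; tuple handling and the symbolic optimizer are omitted, and ISOLATE is the equation-solving sketch shown earlier):

;; Known operator and constant symbols; anything else counts as a variable.
(defparameter *operators* '(= + - * / sqrt expt tan atan sin cos float tuple pi))

(defun vars-of (form)
  (cond ((null form) nil)
        ((and (symbolp form) (not (member form *operators*))) (list form))
        ((consp form) (mapcan #'vars-of (cdr form)))
        (t nil)))

;; Process one correspondence: VAR has just been tied to an application field.
;; Returns three values: the updated list of solved variables, the remaining
;; equations, and the equations derived (each solved for a newly known variable).
(defun var-defined (var solved equations)
  (push var solved)
  (let ((derived '()) (progress t))
    (loop while progress do
      (setf progress nil)
      (dolist (eqn (copy-list equations))
        (let ((unsolved (set-difference (vars-of eqn) solved)))
          (cond ((null unsolved)                        ; step 2b: identity, delete
                 (setf equations (remove eqn equations)))
                ((null (cdr unsolved))                  ; step 2a: one unknown, solve it
                 (let ((new (isolate (first unsolved) (second eqn) (third eqn))))
                   (when new
                     (push new derived)
                     (push (first unsolved) solved)
                     (setf equations (remove eqn equations)
                           progress t))))))))
    (values solved equations (nreverse derived))))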
Figure 5: Viewing LS1 as a Line Segment
Fig. 5 shows correspondences between an application type LS1 and a line-segment.
When the correspondences are specified using the graphical user interface, var-defined is
called as each individual correspondence is specified. The correspondences are saved so that
a view can be remade without using the graphical interface. Each correspondence is a pair
of an abstract variable and a field of the application type; for the example of Fig. 5 the
correspondences are:
((P1Y LOW) (LENGTH SIZE) (THETA ANGLE) (P2X RIGHT))
Fig. 6 shows the calls to var-defined and the actions performed as each correspondence is
processed; each action is labeled with the number of the step in the algorithm.
After all correspondences have been processed, two results have been produced:
1. a list of abstract variables that are defined as references to the application type.
2. a list of equations that define other abstract variables.
1. Enter var-defined, P1Y
2c. deleting tuple P1
2d. deleting eqn (= P1X (X P1))
2d. deleting eqn (= P1Y (Y P1))
4. exit, vars (P1 P1Y)
1. Enter var-defined, LENGTH
4. exit, vars (LENGTH)
1. Enter var-defined, THETA
2a. solved eqn (= SLOPE (TAN THETA))
2a. solved eqn (= PHI (- (/ PI 2.0) THETA))
2a. solved eqn (= DELTAY (* LENGTH (SIN THETA)))
2a. solved eqn (= DELTAX (* LENGTH (COS THETA)))
3. repeating step 2.
2a. solved eqn (= DELTAY (- P2Y P1Y))
    giving (= P2Y (+ DELTAY P1Y))
2b. deleting eqn (= SLOPE (/ DELTAY (FLOAT DELTAX)))
2b. deleting eqn (= SLOPE (/ 1.0 (TAN PHI)))
2b. deleting eqn (= LENGTH (SQRT (+ (EXPT DELTAX 2) (EXPT DELTAY 2))))
2b. deleting eqn (= THETA (ATAN DELTAY DELTAX))
2b. deleting eqn (= PHI (ATAN DELTAX DELTAY))
2b. deleting eqn (= DELTAY (* LENGTH (COS PHI)))
2b. deleting eqn (= DELTAX (* LENGTH (SIN PHI)))
3. repeating step 2.
2c. deleting tuple P2
2d. deleting eqn (= P2X (X P2))
2d. deleting eqn (= P2Y (Y P2))
4. exit, vars (P2 P2Y DELTAX DELTAY PHI SLOPE THETA)
1. Enter var-defined, P2X
2a. solved eqn (= DELTAX (- P2X P1X))
    giving (= P1X (- P2X DELTAX))
3. repeating step 2.
4. exit, vars (P1X P2X)

Figure 6: Incremental Equation Solving
[Figure 7: Variable Dependency Graph for LS1]
The equations form a directed acyclic graph that ultimately defines each variable on the
left-hand side of an equation in terms of references to the application type; Fig. 7 shows the
graph for the LS1 example. If each variable on the right-hand side of an equation is replaced
by its defining equation, if any, and the process is repeated until no further replacements are
possible, the result will be an expression tree whose leaves are references to the application
type. 2 This is easily proved by induction. Initially, the only solved variables are those that
are defined by correspondence with the application type. Each variable that becomes solved
via an equation is defined in terms of previously solved variables. Therefore, the graph of
variable references is acyclic, and replacement of variables by their equational definitions will
result in an expression tree whose leaf nodes are references to the application type.
In general, the abstract type will define procedures that compute all of the variables
shown in the diagram, as functions of the basis variables. Therefore, it is only strictly
necessary for the view type to define procedures to compute the basis variables; all other
variables could be derived from those. However, this approach might be inefficient. For
example, it would be inefficient to calculate the LENGTH from the basis variables for the
application type, since the LS1 type stores the LENGTH directly as the field SIZE. It
is desirable to compute each variable as directly as possible from the application data, i.e.,
using as few data references and operations as possible. The var-defined algorithm operates
incrementally and produces an equation to calculate each abstract variable as soon as it is
possible to do so; the equations therefore are close to references to the application type.
It would be possible to guarantee optimal computation of each variable by implementing a
search algorithm:
This replacement process is performed by the GLISP compiler when code that uses the view is compiled.
1. Assign to each abstract variable that is defined as a field of the application type a cost
of 1 and mark it solved.
2. Examine each equation in the equation set to determine which variables can be solved
in terms of existing solved variables. Assign, as the cost of such a solution, the sum of
the costs of its components and the cost of each operator. If the variable is unsolved
or has a higher cost, adopt the new equation as its definition and the new cost as its
cost.
3. Repeat step 2 until no further redefinitions occur.
We have not implemented this algorithm because the var-defined algorithm approximates
it and has produced excellent results in practice; this algorithm would be useful if some
operators had much higher cost than others.
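A sketch of that search, under the simplifying assumption that every candidate equation has already been oriented as (= var expr); it records only the costs, but the chosen defining equation could be recorded alongside each cost in the same loop.

;; Cost of computing FORM given COSTS, a hash table of per-variable costs;
;; NIL means "not yet computable".  Each operator application costs OP-COST.
(defun expr-cost (form costs op-cost)
  (cond ((numberp form) 0)
        ((eq form 'pi) 0)
        ((symbolp form) (gethash form costs))
        (t (let ((args (mapcar (lambda (f) (expr-cost f costs op-cost)) (cdr form))))
             (if (member nil args) nil (+ op-cost (reduce #'+ args)))))))

;; Step 1: directly stored fields cost 1.  Steps 2-3: repeatedly adopt any
;; equation that gives a variable a cheaper definition, until nothing changes.
(defun relax-costs (direct-vars equations &key (op-cost 1))
  (let ((costs (make-hash-table)) (changed t))
    (dolist (v direct-vars) (setf (gethash v costs) 1))
    (loop while changed do
      (setf changed nil)
      (dolist (eqn equations)                       ; eqn has the form (= var expr)
        (let ((c (expr-cost (third eqn) costs op-cost))
              (old (gethash (second eqn) costs)))
          (when (and c (or (null old) (< c old)))
            (setf (gethash (second eqn) costs) c
                  changed t)))))
    costs))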
3.3 Storing Basis Variables
Some generic procedures both read and write data; thus, it is necessary to define methods
that "store" into basis variables through the view. We assume that values can be stored only
into basis variables; this is a reasonable restriction, since it corresponds to a record consisting
of the basis variables in an ordinary programming language. Each storing method is a small
procedure whose arguments are an instance of the application type and a value that is to be
"stored" into the basis variable. The procedure must update the application data in such a
way that the storage and independence properties are maintained. Without this constraint,
the method used to "store" a variable would be ambiguous, and generic algorithms might
behave differently with different data implementations.
The variables of the abstract type that correspond directly to fields of the application
type are called transfer variables; a list of these is saved. In Fig. 7, the transfer variables
are P1Y, LENGTH, THETA, and P2X. Storing new values for all transfer variables following a
change in value of a basis variable would accomplish a storing of the basis variable. Such a
procedure could be derived in a trivial way:
1. compute the values of all basis variables, other than the one to be stored, from the
application data;
2. compute values of the transfer variables from the basis variables;
3. store these values into the application data structure.
However, such a procedure might be inefficient. It is desirable to update the smallest possible
subset of stored variables. The algorithm below accomplishes this.
1. A set of basis equations is created. This is done by initializing an equation set with
all equations of the abstract type, then calling var-defined for each basis variable.
The result is a set of equations for computing each non-basis variable in terms of
basis variables and, implicitly, a dependency graph that shows the dependency of all
variables on basis variables.
2. The set xfers is computed; this is the subset of the transfer variables that depend
on the basis variable that is to be stored. Dependency is determined by recursively
computing the union of the leaf nodes of the expression tree for the transfer variable,
as implicitly defined by the basis equations.
3. The set dep is computed; this is the subset of the basis variables that some member of
xfers depends on.
4. Code is generated to compute each basis variable in dep, other than the basis variable
to be stored, from the application data.
5. Code is generated to compute each transfer variable in xfers and to store it into the
corresponding field of the application structure. Temporary variables are generated
for intermediate variables that are used in computing the transfer variables if they are
used more than once; otherwise, the intermediate variables are expanded using the
basis equations.
The result of this algorithm is a procedure that revises the application data structure
only as much as necessary to emulate a "store" into a basis variable while leaving the values
of other basis variables unchanged. One such procedure is created for each basis variable. In
the special case where a basis variable corresponds exactly to a transfer variable and does
not affect the value of any other transfer variable, no procedure needs to be created.
(GLAMBDA (VAR-LS1 P1Y)
(DELTAY := (- P2Y P1Y))
(DELTAX := (- P2X P1X))
Figure 8: Method to Store P1Y into a LS1 Data Structure
An example of a procedure to store the basis variable P1Y into an LS1 is shown in Fig. 8.
Although P1Y corresponds directly to the LOW field, it is also necessary to update the values
of the SIZE and ANGLE fields in order to leave the value of the basis variable P2Y unchanged.
The RIGHT field does not need to be updated.
3.4 Creating Application Data from Basis Variables
Some generic procedures create new data structures; for example, two vectors can be added
to produce a new vector. Therefore, a view must have a procedure that can create an instance
of the application type from a set of basis variable values. This is similar to storing a basis
variable, except that all basis variables are stored simultaneously.
(GLAMBDA (SELF P1X P1Y P2X P2Y)
(LET (DELTAY DELTAX)
(DELTAY := (- P2Y P1Y))
(DELTAX := (- P2X P1X))
ANGLE (ATAN DELTAY DELTAX)
Figure 9: Method to Create a LS1 Data Structure from Basis Variables
Fig. 9 shows the procedure that creates a LS1 data structure from a set of line-segment
basis variables. Two local variables, DELTAX and DELTAY, have been created since these
variables are used more than once. When compiled by GLISP, the A function produces
an instance of the LS1 data structure with the specified component values. When tuple
substructures are involved, additional A functions are inserted to create them as well. The
GLISP compiler invokes this method when compiling an A function; as a result, the use of
views can be recursive. For example, a line-segment could be specified by two points P1
and P2, each of which is in polar coordinates r and theta with a view as a Cartesian vector.
In this case, creating a new line-segment instance would also create new polar components.
3.5 Data Translation through Views
Suppose that there are two data types t1 and t2, each of which has a view as abstract
type a. Then it is easy to translate data d1 of type t1 to type t2:
1. For each basis variable v_i of the abstract type a, compute its value from data d1 using
the view from type t1 to a.
2. Create a data structure of type t2 from the set of basis variables using the view from
t2 to a.

[Figure 10: Application Data that Share a View]
However, this algorithm might not be efficient. For example, types t1 and t2 might store
the same non-basis variable, which could be transferred directly without computing basis
variable values. For this reason, we have developed another algorithm for creating data
translation procedures. The algorithm begins by finding the unique abstract type a that is
the intersection of the views from the source and goal types (the name of the view to be
used for each type can be specified if the intersection is not unique). Next, the set xfers
is computed; this is the set of transfer variables of the goal type, i.e., the variables of the
abstract type that correspond to stored fields in the goal type. Code is then generated to
create an instance of the goal type using the values of the transfer variables from the shared
view, a, of the source type. Since the view computes these transfer variables as close to the
source data as possible, the resulting code is often more efficient, and never worse, than a
version that computes basis variables first. Optimal code could be guaranteed by a search
process, as described earlier.
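For concreteness, the following C sketch illustrates the basis-variable translation algorithm
for a pair of hypothetical line-segment representations. The type and field names
(SEG_ENDPOINTS, SEG_POLAR, low, right) and the field meanings are assumptions made for this
illustration; this is not the code generated by the system.

/* Translate through a shared abstract view whose basis variables are the
   endpoints (p1x, p1y, p2x, p2y): read the basis variables from the source
   type, then build the goal type from them.                               */
#include <math.h>
#include <stdio.h>

typedef struct { float p1x, p1y, p2x, p2y; } SEG_ENDPOINTS;  /* source type */
typedef struct { float low, right, size, angle; } SEG_POLAR; /* goal type   */

/* create the goal type from basis variable values (assumed: low = p1y, right = p2x) */
SEG_POLAR make_seg_polar(float p1x, float p1y, float p2x, float p2y)
{
    SEG_POLAR s;
    float dx = p2x - p1x, dy = p2y - p1y;
    s.low   = p1y;
    s.right = p2x;
    s.size  = sqrtf(dx * dx + dy * dy);
    s.angle = atan2f(dy, dx);
    return s;
}

/* translation procedure: the basis variables of the source are its stored fields */
SEG_POLAR endpoints_to_polar(const SEG_ENDPOINTS *e)
{
    return make_seg_polar(e->p1x, e->p1y, e->p2x, e->p2y);
}

int main(void)
{
    SEG_ENDPOINTS e = { 0.0f, 0.0f, 3.0f, 4.0f };
    SEG_POLAR s = endpoints_to_polar(&e);
    printf("size = %f, angle = %f\n", s.size, s.angle);  /* 5.0 and about 0.93 */
    return 0;
}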
3.6 Unit Conversion
Application data can use various units of measurement. Most programming languages omit
units of measurement entirely: there is no way to state the units that are used, much less
to check consistency of units. GLISP allows units to be specified, and it automatically
performs unit conversion [37] and checks validity. Therefore, unit conversion is performed
automatically for all uses of views described in this paper.
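As a rough illustration of the idea (not the GLISP mechanism itself), a unit conversion can be
compiled into a multiplication by a constant factor wherever a view crosses a unit boundary.
The type, field, and unit choices below are assumptions made for the example.

/* A field stored in inches is read through a view whose abstract type uses
   meters, so the generated accessor scales by the conversion factor.       */
#include <stdio.h>

#define INCHES_TO_METERS 0.0254

typedef struct { double inside_diameter_in; } PIPE_IN;   /* application data in inches */

double inside_diameter_m(const PIPE_IN *p)                /* accessor produced for the view */
{
    return p->inside_diameter_in * INCHES_TO_METERS;
}

int main(void)
{
    PIPE_IN p = { 10.0 };
    printf("%f m\n", inside_diameter_m(&p));              /* prints 0.254000 m */
    return 0;
}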
3.7 Checking
A user could specify a partial set of correspondences between the abstract type and application
type: some generic procedures defined by the abstract type may involve only a subset
of the basis variables. For example, it is possible to compute the slope of a line-segment
if only the abstract variable theta is defined. However, any actual errors are detected by
our system.
After the user enters correspondences, mkv issues a warning if any basis variables remain
unsolved. A method to store a basis variable is produced only if that variable was solved and
all basis variables in the set dep (described in section 3.3) were solved. A method to create
an application data structure from basis variables is produced only if all basis variables were
solved. An attempt to specialize a generic procedure that uses missing parts of a view type
will result in errors detected by the GLISP compiler.
It is possible for the user to specify a correspondence that can be computed, but cannot
be "stored". For example, a variable of the abstract type could be defined as the sum of two
stored fields of the application type; it is not possible to determine unambiguously how to
"store into" this variable. An attempt to store into such a variable will result in an error
detected by the GLISP compiler.
Views do not replace or evade strong typing. Indeed, all of the code that is produced
is correctly typed and can be mechanically translated to a strongly typed language. Views
enhance type checking by checking units of measurement, and they provide encapsulation:
when a view is used, only the operations defined by the view are available. The view
mechanism provides the benefits of encapsulation while enhancing reusability and producing
more efficient code than other encapsulation mechanisms [9].
4 Graphical User Interface
A graphical user interface makes it easy to specify correspondences between an application
type and an abstract type using a mouse pointing device. The program, called mkv (for
"make view"), is called with the goal abstract type and source type as arguments. mkv opens
a window and draws a menu of the values available from the source type and a diagram or
menu of variables of the abstract type (Fig. 1).
The user selects items from the goal diagram by clicking the mouse on labeled "buttons"
on the diagram. The interface program highlights a button when the mouse pointer is moved
near it; clicking the mouse selects the item. The user then selects a corresponding item from
the menu that represents the application data; a line is drawn between the two items to
show the correspondence. The user can also specify an algebraic expression, involving one
or more application data fields, by selecting OP from the command menu and specifying an
expression tree of operators and operands.
A diagram can present buttons for many ways to represent a given kind of data. 3 With
such a diagram, it is likely that there will be buttons that correspond directly to the
existing form of the data. Fig. 11 shows the diagram for a line-segment. Even though
line-segment is a simple concept, a line segment could be specified in many ways: by two
end points, or by one end point, a length, and an angle, etc. The diagram is intended
to present virtually all the reasonable possibilities as buttons. This interface has several
advantages:
1. Diagrams are easily and rapidly perceived by humans [26], and they are widely used
in engineering and scientific communication [11].
3 If there were alternative representations that were sufficiently different to require different diagrams, a
menu to select among alternative diagrams could be presented to the user first.
Figure 11: Initial Diagram for a Line Segment
2. The interface is self-documenting: the user does not need to consult a manual, or know
details of the abstract data type, to specify a view.
3. The interface is very fast, requiring only a few mouse clicks to create a view.
4. The user does not have to perform error-prone algebraic manipulations.
It is easy to add diagrams for new abstract types. A drawing program allows creation
of diagrams, including buttons. The only other thing that is needed is a specification of the
basis variables and equations for the abstract type, as shown in Figs. 3 and 4.
5 Achieving Reuse with Views
Views are important for practical reuse because they achieve a clean separation between the
representation of application data and the abstract data types used in generic procedures.
The user obtains the benefits of reuse without having to understand and conform to a
standard defined by someone else. In this section, we describe several ways in which views
can be applied to achieve reuse:
1. by specialization of generic procedures through views
2. by translation of data
3. by creation of wrappers or transforms for use with object-oriented, functional, or
transformational programming.
5.1 Specialization of Generic Procedures
The GLISP compiler [31, 32] can produce a specialized version of a generic procedure by
compiling it relative to a view. The result is a self-contained procedure that performs the
action of the generic procedure directly on the application data. The specialized procedure
can be used as part of the application program. GLISP is a high-level language with abstract
data types that is compiled into Lisp; it is implemented in Common Lisp [47]. GLISP types
include data structures in Lisp and in other languages. GLISP is described only briefly here;
for more detail, see [33] and [31].
Data representation is a barrier to reuse in most languages because the syntax of program
code depends on the data structures that are used and depends on whether data is stored
or is computed. This prevents reuse of code for alternative implementations of data. GLISP
uses a single syntax, similar to a Lisp function call, to access features of an object [33]:
(feature object)
The interpretation of this form depends on the compile-time type of the object:
1. If feature is the name of a data field of object, code to access that field is compiled.
2. If feature is a message selector that is defined for the type of object, then
(a) a runtime message send can be compiled, or
(b) the procedure that implements the message can be specialized or compiled in-line,
recursively.
3. If feature is the name of a view defined for the type of object, then the type of object
is locally changed to the view type.
4. If feature is defined as a function, the code is left unchanged as a function call.
5. Otherwise, a warning message is generated that feature is undefined.
In this way, a generic program can access a feature of an application type without making
assumptions about how that feature is implemented. This is similar to object-oriented
programming; however, since actual data types are known at compile time, message sending
can be eliminated and replaced by in-line compilation or by calls to specialized procedures.
This is equivalent to making transformations to the code, and it significantly improves
efficiency of the compiled code.
A view is implemented as a GLISP type whose stored form is the application data; the
abstract type associated with the view is a super-class of the view type. The view type
defines messages to compute the basis variables of the abstract type from the application
type. As a simple example, consider a (handwritten, for simplicity) view of a pipe as a
circle (Fig. 2):
(pipe-as-circle (p pipe)
  prop   ( (radius ((inside-diameter p) / 2)) )
  supers (circle))
The stored form of pipe-as-circle is named p and is of type pipe. The basis variable
radius is defined as the inside-diameter of the pipe divided by 2. circle is a superclass,
so all methods of circle are inherited by pipe-as-circle:
(gldefun t7 (p:pipe) (area (circle p)))

float t7 (p)
  return 0.78539816 * p->inside_diameter * p->inside_diameter;
The function t7 has an argument p whose type is pipe. The code (circle p) changes the
type of p to the view type pipe-as-circle. The definition of area is inherited from circle
and compiled in-line; this definition is in terms of the radius, which is expanded in-line
as the inside-diameter of the pipe divided by 2. The use of the view has zero cost at
runtime because the optimizer has combined the translation from the view with the constant
π, producing a new constant π/4.
Once a view has been made, all of the generic procedures of the abstract type are available
by automatic specialization. Thus, a single viewing process allows reuse of many procedures.
As a larger example, we consider a function that finds the perpendicular distance of a point
to the left of a directed line segment (if the point is to the right, the distance will be negative).
Although this is not a large function, it is not easy to find an effective algorithm in a reference
book or to derive it by hand: graduate students assigned to produce such a procedure by
hand for the LS1 data type report that it takes from 20 minutes to over an hour. Books
often omit important features: [3] assumes that a human will determine the sign of the
result. Even if a formula is found, it may not be expressed in terms of the available data.
Some versions of the formula involve division by numbers that can be nearly zero. Reuse of a
carefully developed generic procedure is faster, less costly, and less error-prone than writing
one by hand.
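For reference, the geometry underlying this example is the standard signed cross-product
formula; the C sketch below computes it directly for the simple two-endpoint representation.
It is only an illustration of the mathematics, not the generic procedure of Fig. 12 or its
specialization in Fig. 13, and it assumes the segment has nonzero length.

/* Signed perpendicular distance of point (px, py) to the left of the directed
   segment from (p1x, p1y) to (p2x, p2y).  A negative result means the point
   lies to the right of the segment's direction.                              */
#include <math.h>
#include <stdio.h>

double leftof_distance(double p1x, double p1y, double p2x, double p2y,
                       double px, double py)
{
    double dx = p2x - p1x, dy = p2y - p1y;
    double len = sqrt(dx * dx + dy * dy);        /* assumed nonzero */
    /* z-component of (p2 - p1) x (p - p1), normalized by the segment length */
    return (dx * (py - p1y) - dy * (px - p1x)) / len;
}

int main(void)
{
    /* a point one unit to the left of the upward-directed segment (0,0)->(0,2) */
    printf("%f\n", leftof_distance(0.0, 0.0, 0.0, 2.0, -1.0, 1.0));   /* 1.0 */
    return 0;
}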
Code that results from the viewing and compilation process is presented below. We do not
expect a user of our system to read or understand such code. While some authors [42] have
proposed that the user read and edit code that is produced by an automatic programming
system, we do not: it is not easy to read someone else's code, and this is especially true for
machine-generated code that has been optimized. We expect that users will treat the output
of our system as a "black box", as is often done with library subroutines.
Fig. 12 shows the generic function line-segment-leftof-distance. Fig. 13 shows a
specialized version of it for a LS1 record in C. This code was produced by GLISP compilation
(gldefun line-segment-leftof-distance (ls:line-segment p:vector)
  ( ... (deltay ls) * ( ... ) ... ))

Figure 12: Generic Function: Distance of Point to the Left of a Line Segment
float lsdist (l, p)
  return cos(l->angle)
       * (p->x ... (l->right ... l->size * cos(l->angle)));

Figure 13: leftof-distance specialized for LS1 in C
followed by mechanical translation into C; the resulting C code has no dependence on Lisp.
Since the LS1 type is quite different from the abstract type line-segment, this example
illustrates that a single generic procedure can be reused for a variety of quite different
implementations of data. The specialized version is efficient (two multiplies and division
by the length were removed by algebraic optimization) and is expressed in terms of the
application data, without added overhead. While repeated subexpressions sometimes appear
in specialized code, these can be removed by the well-understood compiler technique of
common subexpression elimination [43].
A useful viewpoint is that there is a mapping between an abstract data type and the
corresponding application type, and an isomorphism between the two based on a generic
algorithm and a specialized version of that algorithm. Just as compilation of a program in
an ordinary programming language produces an equivalent program in terms of lower-level
operations and data implementations, specialization of a generic procedure produces an
equivalent procedure that is more tightly bound to a specific implementation of the abstract
data. Similar viewpoints are found in mathematical definitions of isomorphism (e.g., [39], p.
129), in denotational semantics [15], and in work on program transformation [45] [17]. [13]
describes views in terms of such isomorphisms. However, we note that many applications
use approximations that do not satisfy the strict mathematical definition of isomorphism.
5.2 Reuse by Data Translation
One way to reuse a program with data in a different format is to translate the data into
the right form; translation is also required to combine separate data sets that have different
formats. As use of computer networks increases, users will often need to combine data from
different sources or use data with a program that assumes a different format. The ARPA
Knowledge-Sharing Project [30] addresses the problem of sharing knowledge bases that were
developed using different ontologies. Writing data translation programs by hand requires
human understanding of both data formats. [40] presents a language for describing parameter
lists and a system that produces interface modules that translate from a source calling
sequence to a target calling sequence. Our paper [33] described automatic construction of
translation procedures from correspondences; the techniques in this paper extend those.
Standardization of data representations and formats is one way to achieve interoperability.
However, it is difficult to find standards that fit everyone's needs, and conformity to standards
is costly for some users. Views provide the benefits of standardization without the costs. As
described in Section 3.5, if there are views from two application types to a common abstract
type, a data conversion procedure to convert from one application type to the other can
be generated automatically. Thus, interchange of data requires only that the owner of each
data set create a view that describes how the local data format corresponds to an abstract
type. Only n views are needed to translate among any of n data formats, and knowledge
of others' data formats is unnecessary. Use of remote procedure calls to servers across a
network could be facilitated by advertising an abstract data type expected by the remote
procedure; if users create views of their data as the abstract data, the translation of data
can be performed automatically.
Materialization of a new data set may be computationally expensive, but the cost is
minor for small data sets. It is also reasonable to translate a large data set incrementally,
or as a whole if a large amount of computation will be performed on it; [25] found that
translation of data between phases of a large compiler was a minor cost.
5.3 Object-Oriented, Functional, and Transformational Programming
The algorithms in this paper can be used with other styles of program development.
One benefit of object-oriented programming (OOP) is reuse of methods that have been
pre-defined. At the same time, OOP systems often impose restrictions that application
objects must meet, e.g., that an application object must have the same stored data as its
superclass or must provide certain methods with the same names that are used in the existing
methods. Thus, reuse with OOP requires conformance to existing standards. Views allow
new kinds of objects to be used with existing methods.
The term adapter or wrapper [9] denotes an object class that makes application data
appear to be an instance of a target class. The wrapper object contains a pointer to the
application data and performs message translation to implement the messages required of
a member of the target class. In the pipe example shown in Fig. 2, a wrapper class
pipe-as-circle would implement the radius message expected by the class circle by
sending an inside-diameter message to the pipe and returning the result of this message
divided by 2. The algorithms and user interfaces described in this paper could be used
to create wrapper classes. The messages expected by the target class correspond to basis
variables. The equations produced in making views can easily be converted into methods
in the appropriate syntax for the OOP system. We have previously described [35] the use
of wrapper objects to allow display and direct-manipulation editing of user data by generic
editor programs.
There are two disadvantages of using wrapper objects [9]. It is necessary to allocate a
wrapper object at runtime, which costs time and storage. Second, since translation of data
is performed interpretively, there is overhead of additional message sending, and the same
translation may be performed many times during execution. However, in cases where these
costs are tolerable, wrapper objects are an easy way to achieve reuse.
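A minimal sketch of the wrapper idea follows, written in C with an explicit function pointer
standing in for the message dispatch that an object-oriented language would provide; the names
PIPE and CIRCLE_VIEW are assumed for illustration.

/* The wrapper holds a pointer to the application object and translates the
   "radius" request on every call; this per-call translation is the runtime
   cost discussed above.                                                     */
#include <stdio.h>

typedef struct { double inside_diameter; } PIPE;        /* application type   */

typedef struct {                                        /* "circle" interface */
    void  *data;
    double (*radius)(void *data);
} CIRCLE_VIEW;

static double pipe_radius(void *data)                   /* message translation */
{
    return ((PIPE *) data)->inside_diameter / 2.0;
}

CIRCLE_VIEW pipe_as_circle_wrapper(PIPE *p)             /* wrapper constructor */
{
    CIRCLE_VIEW v = { p, pipe_radius };
    return v;
}

double circle_area(CIRCLE_VIEW c)                       /* generic procedure   */
{
    double r = c.radius(c.data);
    return 3.14159265358979 * r * r;
}

int main(void)
{
    PIPE p = { 2.0 };
    printf("%f\n", circle_area(pipe_as_circle_wrapper(&p)));  /* pi * 1 * 1 */
    return 0;
}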
Views can be used in a related way with functional programming. Functions can be
created to calculate the value of each basis variable from data of the application type. In the
pipe example, a function radius(p:pipe) would be created that returns the value of the
inside-diameter divided by 2. Storing of basis variables could be implemented by storing
into the application data, for functional languages that allow this, or by creating new data
with the updated values in the case of strict functional languages.
The Polya language [17] [8] allows a user to specify a set of transformations that are
made to the intermediate code of a generic algorithm; these transformations are equivalent
to the transformations performed by the GLISP compiler [31]. The algorithms presented
here could be used to generate transforms for a language such as Polya.
6 Related Work
6.1 Software Reuse
Krueger [24] is an excellent survey of software reuse; it also gives criteria for effective software
reuse. Biggerstaff and Perlis [4] contains papers on theory and applications of software
reuse. Mili [29] provides an extensive survey of approaches to software reuse, emphasizing
the technical challenges of reuse for software production. Artificial intelligence approaches
to software engineering are described in [1], [28], and [41]. Some papers from these sources
are reviewed individually in this section.
6.2 Software Components
Weide [52] proposes a software components industry analogous to the electronic components
industry, based on formally specified and unchangeable components with rigid interfaces.
Because the components would be verified, unchangeable, and have rigid interfaces, errors
in using or modifying them would be prevented. Views, as described in this paper, allow
components to be adapted to fit the application.
6.3 Languages with Generic Procedures
Programming languages such as Ada, Modula-2 [21] [27], and C++ [48] allow parameterized
modules; by constructing a module containing generic procedures for a parameterized
abstract data type, the user obtains a specialized version of the module and its procedures.
However, these languages allow much less parameterization than is possible using views; for
example, it is not possible to define a procedure that works for either Cartesian or polar
vectors, and it is not even possible to state units of measurement in these languages.
6.4 Functional and Set Languages
ML [53] [38] is like a strongly typed Lisp; it includes polymorphic functions (e.g., functions
over lists of an arbitrary type) and functors (functions that map structures, composed
of types and functions, to structures). ML also includes references (pointers) that allow
imperative programming. ML functors can instantiate generic modules such as container
types. However, ML does not allow generics as general as those described here. Our system
allows storing into a data structure through a view; for example, a radius value can be
"stored" into a pipe through a view. Our system also allows composition of views.
Miranda [50] is a strongly typed, purely functional language that supports higher-order
functions. While this allows generic functions to be written, it is often difficult to write
efficient programs in a purely functional language [38]: a change to data values requires
creation of a new structure in a functional language.
6.5 Transformation Systems
Transformation systems generate programs starting from an abstract algorithm specification;
they repeatedly apply transformations that replace parts of the abstract algorithm with code
that is closer to an implementation, until executable code is finally reached. Our views specify
transformations from features of abstract types to their implementations.
Kant et al. [23] describe the Sinapse system for generating scientific programs involving
simulation of differential equations over large spatial grids for applications such as seismic
analysis. Sinapse accepts a relatively small program specification and generates from it a
much larger program in Fortran or other languages by repeatedly applying transformations
within Mathematica [54]. This system appears to work well within its domain of applicability.
KIDS [46] can transform general algorithms into executable versions that are highly
efficient for combinatorial problems. The user selects transformations to be used and supplies
a formal theory for the domain of application. This system is interesting and powerful, but
its user must be mathematically sophisticated.
Gries and Prins [16] proposed a system that would use syntactic transformations to
specify the implementation of abstract algorithms. [31] describes related techniques that
were implemented earlier. Volpano [51] and Gries [17] describe transformation of programs
by syntactic coordinate transformations for variables or for patterns involving uses of vari-
ables. Our views require fewer specifications because they operate at semantic (type-based
and algebraic) levels, rather than at the syntactic level; most patterns of use are handled
automatically by the algebraic optimization of the compiler.
Berlin and Weise [2] used partial evaluation to improve the efficiency of scientific pro-
grams. Using information that some features of a problem are constant, their compiler
performs as many constant calculations as possible at compile time, resulting in a program
that is specialized and runs faster. Our system incorporates partial evaluation by means of
in-line compilation and symbolic optimization.
6.6 Views
Goguen describes a library interconnection language called LIL [12] and has implemented the
language OBJ3 [13] [14] that incorporates parameterized programming and views. A view in
OBJ3 is a mapping between a theory T and a module M that consistently maps sorts (types)
of T to sorts of M and operations of T to operations of M. OBJ3 is based on formal logical
theory using order-sorted algebra; it operates as a theorem prover in which computation is
performed by term rewriting. The authors state ([14], p. 56):
OBJ3 is not a compiler, but is rather closer to an interpreter. The associa-
tive/commutative rewrite engine is not efficient enough for very large problems.
Tracz [49] describes LILEANNA, which implements LIL for construction of Ada packages;
views in LILEANNA map types, operations, and exceptions between theories. Our system
creates views from correspondences between application types and mathematical objects; the
possible correspondences are more general than the one-to-one correspondences specified in
OBJ3 views. Our system produces efficient specialized procedures in ordinary programming
languages and is intended to be used as a program generation system.
Garlan [10] and Kaiser [22] use views to allow multiple tools in a program development
environment to access a common set of data about the program being developed. Their
MELD system can combine features, which are collections of object classes and methods, to
allow additive construction of a system from selected component features.
Hailpern and Ossher [18] describe views in OOP that are subsets of the methods defined
for a class. They use views to restrict use of certain methods; for example, a debugger could
use methods that were unavailable to ordinary programs. This system has been used in the
development environment [19].
6.7 Data Translation
IDL (Interface Description Language) [25] allows exchange of large structured data, possibly
including structure sharing, between separately written components of a large software
system such as a compiler. IDL performs representation translation, so that different
representations of data can be used by the different components. Use of IDL requires that
the user write precise specifications of the source and target data structures.
Herlihy and Liskov [20] describe a method for transmission of structured data over a net-
work, with a possibly different data representation at the destination. Their method employs
user-written procedures to encode and decode the data into transmissible representations.
They also describe a method for transmission of shared structures. [5] describes a system
that automatically generates stub programs to interconnect processing modules that are in
different languages or processors; this work is complementary to the techniques presented in
this paper.
The Common Object Request Broker Architecture (CORBA) [6] includes an Interface
Definition Language and can automatically generate stubs to allow interoperability of objects
across distributed systems and across languages and machine architectures.
Purtilo and Atlee [40] describe a system that translates calling sequences by producing
small interface modules that reorder and translate parameters as necessary for the called
procedure.
In all of these cases, the emphasis is on relatively direct translation of data, focusing
on issues of record structure, number representation, etc. The techniques described in this
paper could be used to extend these approaches to cases where the ontology, or method
of description of objects, differs. Specialization of generic algorithms is more efficient than
interpretive conversion of data.
6.8 Object-oriented Programming
OOP is popular as a mechanism for software reuse; Gamma et al. [9] describe design patterns
that are useful for OOP. We have described how views could be used to construct wrapper
objects that make application objects appear to be members of a desired class. Use of views
with the GLISP compiler extends good ideas in OOP:
1. OOP makes the connection between a message and the corresponding procedure at
runtime; this is often a significant cost [9]. C++ [48] has relatively efficient message
dispatching, at some cost in flexibility. Because GLISP [31] can specialize a method
in-line and optimize the resulting code in context, the overhead of interpretation is
eliminated, and often there is no extra cost.
2. Interpretation of messages in OOP postpones error checking to runtime. With views
and GLISP, type inference and in-line expansion cause this checking to be done statically.
3. Views provide a clean separation between application data and the abstract type, while
some OOP systems require conformance between an instance and a superclass, e.g.,
the instance may have to contain the same data variables. With views, there can be
multiple views as the same type, e.g. a pipe can be viewed as a circle in two distinct
ways. Views allow a partial use of an abstract type, e.g. the area of a circle can be
found without specifying its center.
4. In OOP, the user must learn the available classes and messages. Some OOP operating
systems involve over 1,500 classes, so this is nontrivial. With views, the user does not
have to understand the abstract type, but only has to indicate correspondences between
the application type and the abstract type. The user interface is self-documenting.
5. Our system allows translation to a separate application language, such as C, without
requiring that the application be written in a particular language.
7 Discussion and Conclusions
We believe that generation of application code by compilation through views is an efficient
and practical technique. Expansion of code through views is recursive at compile time,
so that composition of views is possible. Generic algorithms are often written in terms of
other generics. While the examples presented in this paper have been small ones for clarity,
the approach does scale to larger algorithms. We have produced specialized versions of
algorithms that comprise about 200 lines of code in C, e.g., finding the convex hull of a set
of points (which uses the leftof-distance generic function defined for a line-segment),
finding the perimeter, area, and center of mass of a polygon, etc. Compilation and translation
to C of a 200-line program takes approximately 10 seconds on a workstation; this is much
faster than human coding. The output code is efficient, often better than code produced by
human programmers. Because of the efficiency of the generated code, we consider specialized
compilation of generic procedures to be the best way to achieve reuse using views.
Our approach has several significant advantages:
1. The user interface is self-documenting and allows views to be created quickly and easily.
2. Specialized versions of generic algorithms for an application can be created in seconds.
3. The specialized algorithms can be produced in a standard programming language that
is independent of our system.
4. The code that is produced is optimized and efficient.
5. Static error checking is performed at compile time.
Related techniques have been used to create programs by connecting diagrammatic
representations of physical and mathematical laws [36].
The sizes of the various components of the system, in non-comment lines of Lisp source
code, are shown in the following table:
Component: Lines:
Make Views: mkv 265
Symbolic Algebra 1,003
Graphical Interface 1,020
GLISP Compiler 9,097
Translation to C 831
Total 12,216
The Symbolic Algebra component includes algorithms used by mkv and described in
this paper. The amount of code required for making views is not too large. Translation
of specialized code into languages other than C should not be difficult: the C translation
component is not large, and much of it could be reused for other languages; the translation
uses patterns that are easily changed.
Acknowledgments
Computer equipment used in this research was furnished by Hewlett Packard and IBM.
I thank the anonymous reviewers for their helpful suggestions for improving this paper.
--R
IEEE Trans.
"Compiling Scientific Code Using Partial Eval- uation,"
CRC Standard Mathematical Tables and Formulae
Software Reusability (2 vols.
"A Packaging System for Heterogeneous Execution Environments,"
"The Common Object Request Broker: Architecture and Specification,"
A Discipline of Programming
"On Program Transformations,"
Design Patterns: Elements of Reusable Object-Oriented Software
"Views for Tools in Integrated Environments,"
"Reusing and Interconnecting Software Components,"
"Principles of Parameterized Programming,"
"Introducing OBJ,"
The Denotational Description of Programming Languages
"A New Notion of Encapsulation,"
"The Transform - a New Language Construct,"
"Extending Objects to Support Multiple Interfaces and Access Control,"
Integrating Tool Fragments,"
"A Value Transmission Method for Abstract Data Types,"
Data Abstraction and Program Development using Modula-2
"Synthesizing Programming Environments from Reusable Features,"
"Scientific Programming by Automated Synthesis,"
"Software Reuse,"
"IDL: Sharing Intermediate Representations"
"Why a Diagram is (Sometimes) Worth 10,000 Words,"
The Modula-2 Software Component Library
Automating Software Design
"Reusing Software: Issues and Research Directions,"
"Enabling Technology for Knowledge Sharing,"
"GLISP: A LISP-Based Programming System With Data Abstraction,"
"Negotiated Interfaces for Software Reuse,"
"Software Reuse through View Type Clusters,"
"Software Reuse by Compilation through View Type Clusters,"
"Generating Programs from Connections of Physical Models,"
"Conversion of Units of Measurement,"
ML for the Working Programmer
Introduction to Discrete Structures
"Module Reuse by Interface Adaptation,"
Readings in Artificial Intelligence and Software Engineering
The Programmer's Apprentice
A Mathematical Theory of Global Program Optimization
Models of Thought
"KIDS: A Semiautomatic Program Development System,"
Knowledge-based Software Development System,"
the Language
"LILEANNA: A parameterized programming language,"
"An Overview of Miranda,"
"The Templates Approach to Software Reuse,"
"Reusable Software Components,"
Mathematica: a System for Doing Mathematics by Computer
Advances in Computers
--TR
--CTR
Jeffrey Parsons , Chad Saunders, Cognitive Heuristics in Software Engineering: Applying and Extending Anchoring and Adjustment to Artifact Reuse, IEEE Transactions on Software Engineering, v.30 n.12, p.873-888, December 2004
Ted J. Biggerstaff, A perspective of generative reuse, Annals of Software Engineering, 5, p.169-226, 1998
Don Batory , Bart J. Geraci, Composition Validation and Subjectivity in GenVoca Generators, IEEE Transactions on Software Engineering, v.23 n.2, p.67-82, February 1997
Gordon S. Novak, Jr., Software Reuse by Specialization of Generic Procedures through Views, IEEE Transactions on Software Engineering, v.23 n.7, p.401-417, July 1997 | visual programming;abstract data type;symbolic algebra;software reuse;data conversion;program transformation;generic algorithm |
631218 | Compositional Programming Abstractions for Mobile Computing. | Abstract: Recent advances in wireless networking technology and the increasing demand for ubiquitous, mobile connectivity demonstrate the importance of providing reliable systems for managing reconfiguration and disconnection of components. Design of such systems requires tools and techniques appropriate to the task. Many formal models of computation, including UNITY, are not adequate for expressing reconfiguration and disconnection and are, therefore, inappropriate vehicles for investigating the impact of mobility on the construction of modular and composable systems. Algebraic formalisms such as the π-calculus have been proposed for modeling mobility. This paper addresses the question of whether UNITY, a state-based formalism with a foundation in temporal logic, can be extended to address concurrent, mobile systems. In the process, we examine some new abstractions for communication among mobile components that express reconfiguration and disconnection and which can be composed in a modular fashion. | Introduction
The UNITY [1] approach to concurrency has been influential
in the study of distributed systems in large part
because of its emphasis on design aspects of the programming
process, rather than simply serving as a tool for veri-
fication. The technique has been used to derive concurrent
algorithms for a wide range of problems, and to specify and
verify correctness even in large software systems [2]. How-
ever, because of the essentially static structure of computations
that can be expressed, standard UNITY is not a suitable
tool for addressing the problems faced by the designers
of mobile computing systems, such as cellular telephone
networks. This paper addresses the problem of modeling
dynamically reconfiguring distributed systems with an extension
of the UNITY methodology, which we refer to as
Mobile UNITY.
While formal models capable of expressing reconfiguration
have been explored from the algebraic perspective [3]
and from a denotational perspective [4], [5], very few state-based
models can naturally express reconfiguration of com-
ponents. Also, while algebraic models such as the π-
calculus may be adequate for expressing reconfiguration,
it is not so clear how to handle the issue of disconnection.
Recent work has recognized the importance of introducing
location and failures as concepts in mobile process algebras
[6], [7], [8], but because the authors are primarily
concerned with modeling mobile software agents and the
effect of host failures on such systems, they do not directly
address disconnection of components that continue to function
correctly but independently.
(P. McCann is with Lucent Technologies, Naperville, Illinois. E-mail: [email protected]. G.-C. Roman is with the Department of Computer Science, Washington University, St. Louis, Missouri. E-mail: [email protected].)
In addition to directly modeling reconfiguration and dis-
connection, Mobile UNITY attempts to address design issues
raised by mobile computing. These issues stem from
both the characteristics of the wireless connection and the
nature of applications and services that will be demanded
by users of the new technologies. Broadly speaking, mobile
computing leads to systems that are decoupled and context
dependent, and brings new challenges to implementing the
illusion of location-transparency. By examining the trends
in applications and services currently being implemented
by system designers, we hope to gain insight into the fundamentals
of the new domain and outline opportunities for
extensions to models of computation such as UNITY.
Decoupling. The low bandwidth, frequent disconnec-
tion, and high latency of a wireless connection lead to a de-coupled
style of system architecture. Disconnections may
be unavoidable as when a host moves to a new location,
or they may be intentional as when a laptop is powered
off to conserve battery life. Systems designed to work
in this environment must be decoupled and opportunis-
tic. By "decoupled," we mean that applications must be
able to run while disconnected from or weakly connected
to servers. "Opportunistic" means that interaction can be
accomplished only when connectivity is available. These
aspects are already apparent in working systems such as
filesystems and databases that relax consistency so that
disconnected hosts can continue to operate [9], [10].
Decoupling corresponds to the issue of modularity in system
design, although in the case of mobility modularity is
taken to a new extreme. Because of user demands, components
must continue to function even while disconnected
from the services used. Also, components must be ready
to interface with whatever services are provided at the current
location; the notion that a component is statically
composed with a fixed set of services must be abandoned.
The separation of interfaces from component implementations
has long been advocated in the programming language
community, but these notions need to be revisited
from the more dynamic perspective of mobile computing.
Context Dependencies. In addition to being weakly
connected, mobile computers change location frequently,
which leads to demand for context dependent services.
A simple example is the location dependent World Wide
Web browser of Voelker et al [11]. This system allows the
user to specify location-dependent queries for information
about the current surroundings and the services available.
A more general point of view is evidenced in [12], which
notes that application behavior might depend on the to-
tality of the current context, including the current location
and the nearness of other components, like the identity of
the nearest printer or the group of individuals present in a
room. The dynamic nature of interaction among components
brings with it unprecedented challenges analogous to
those of open software systems. Components must function
correctly in any of the myriad configurations that might oc-
cur. They must also continue to function as components
are reconfigured.
In Mobile UNITY, although interaction is specified on a
pairwise basis and is usually conditioned on the proximity
of two components, the model in general can express interaction
that is conditioned on arbitrary global predicates,
such as the willingness of two components to participate
in an interaction, the presence of other components, or the
presence of interference or noise on a wireless link. Also,
any given collection of pairwise interactions compose naturally
to produce compound interactions that may span
many components.
Location Transparency. While some systems will be
mobile-aware and require explicit reasoning about location
and context, other applications naturally make use
of location-transparent messaging. For example, Mobile
IP [13] attempts to provide this in the context of the In-
ternet. It is illustrative of the mobility management issues
that must be addressed by designers. Our previous
work [14] modeled Mobile IP in Mobile UNITY. Other location
registration schemes have also been dealt with for-
mally, for example with the π-calculus [15] and with standard
UNITY [16]. Although the latter work is similar to
ours in that it is an application of UNITY to mobility, it
deals mainly with the part of the algorithm running in the
fixed network and only indirectly with communication between
the mobile nodes and the fixed network.
From the perspective of formal modeling, location registration
provides a rich source of problems to use as ex-
amples. Such mobility management algorithms show that
even if the goal is transparent mobility, the designers of
such a protocol must face the issues brought on by mobil-
ity. Explicit reasoning about location and location changes
are required to argue that a given protocol properly implements
location transparency. Also, location registration
protocols may form the very basis for location- and context-dependent
services, which might make use of location information
for purposes other than routing.
The remainder of the paper is organized as follows. Section
II presents a brief introduction to standard UNITY
and the modifications we have made to express context-dependent
interactions. The last part of the section gives
a proof logic that accommodates the changes. Section III
makes use of the new notation to express a new abstraction
for communication called transient sharing. Section IV
continues by introducing and formally expressing a new
communication mechanism called transient synchroniza-
tion. Concluding remarks are presented in Section V.
II. Mobile UNITY Notation
Our previous work [17] presented a notation and logic for pair-wise
interactions among mobile components. The pair-wise
limitation simplified some aspects of the discussion,
for instance, side-effects of an assignment were limited to
only those components directly interacting with the component
containing that assignment. However, the proof
logic presented was very complex and operational, sometimes
relying on sequencing of operations to define precise
semantics. In this paper we present a simpler expression of
transient interactions that focuses attention on the implications
that component mobility has for the basic atomicity
assumptions made by a system model, and provide a proof
logic that is much more concise than our earlier work. In
the process, we generalize interactions to include multiple
participants.
The new model is developed in the context of very low-level
wireless communication, in order to focus on the essential
details of transient interaction among mobile com-
ponents. The key concept introduced in this section is the
reactive statement, which allows for the modular specification
of far-reaching and context-dependent side effects
that a statement of one component may have. Using this
primitive and a few others as a basis, in subsequent sections
we present high-level language constructs that may
be specified and reasoned about.
A. Standard UNITY
Recall that UNITY programs are simply sets of assignment
statements which execute atomically and are selected
for execution in a weakly fair manner-in an infinite
computation each statement is scheduled for execution infinitely
often. Two example programs, one called sender
and the other receiver , are shown in Figure 1. Program
sender starts off by introducing the variables it uses in the
declare section. Abstract variable types such as sets and
sequences can be used freely. The initially section defines
the allowed initial conditions for the program. If a
variable is not referenced in this section, its initial value is
constrained only by its type. The heart of any UNITY program
is the assign section consisting of a set of assignment
statements.
The execution of program sender is a weakly-fair interleaving
of its two assignment statements. The assignment
statements here are each single assignment statements, but
in general they may be multiple-valued, assigning different
right-hand expressions to each of several left-hand vari-
ables. Such a statement could be written ~x := ~e, where
~x is a comma-separated list of variables and ~e is a comma-separated
list of expressions. All right-hand side expressions
are evaluated in the current state before any assignment
to variables is made. Execution of a UNITY program
is a nondeterministic but fair infinite interleaving of the
assignment statements. Each produces an atomic transformation
of the program state and in an infinite execution,
each is selected infinitely often. The program sender , for
example, takes the variable bit through an infinite sequence
of 1's and 0's. The next value assigned to bit is chosen
program sender
  declare
    bit : boolean
  initially
    bit = 0
  assign
    bit := 1
  [] bit := 0

program receiver
  declare
    bit : boolean
    history : sequence of boolean
  initially
    history = ε
  assign
    history := history Δ bit

Fig. 1. Two standard UNITY programs.
nondeterministically, but neither value may be forever excluded.
The program receiver has two variables, one of which
is a sequence of boolean values, but contains only one
assignment statement. The statement uses the notation
history Δ bit to denote the sequence resulting from appending
bit to the end of history . This expression is evaluated on
the right-hand side and assigned to history on the left hand
side, essentially growing the history sequence by one bit on
each execution. Note that we have not yet introduced the
notion of composition, so the two programs should be considered
completely separate entities for now.
A.1 Proof Logic
Rather than dealing directly with execution sequences,
the formal semantics of UNITY are given in terms of program
properties that can be proven from the text. The fair
interleaving model leads to a natural definition of safety
and liveness properties, based on quantification over the
set of assignment statements. We choose to use the simplified
form of these definitions presented in [18] and [19] for
the operators co and transient. Each operator may be applied
to simple non-temporal state predicates constructed
from variable names, constants, mathematical operators,
and standard boolean connectives. For example, if p and
q are state predicates, the safety property p co q means
that if the program is in a state satisfying p, the very next
state after any assignment is executed must satisfy q. Proof
of this property involves a universal quantification over all
statements s, showing that each will establish q if executed
in a state satisfying p.
The notation {p} s {q} is the standard Hoare-triple notation
[20]. In addition to the quantification, we are also
obligated to show p ⇒ q, which in effect takes the special
statement skip (which does nothing) to be part of the
quantification. Without this qualification, some cases of co
would not be true safety properties, because they could be
violated by the execution of an action which does nothing.
As an example of co, consider the property
bit = 0 ∨ bit = 1  co  bit = 0 ∨ bit = 1
Clearly, this property holds of program sender , because
every statement, when started in a state where bit is 1 or
0, leaves bit in a state satisfying 1 or 0. This property is
an example of a special case of co because the left- and
right-hand sides are identical. Such a property may be
abbreviated with the operator stable, as in stable (bit =
0 ∨ bit = 1). Because the initial conditions satisfy the
predicate, i.e., bit is initially either 0 or 1, it is also
an invariant, written invariant (bit = 0 ∨ bit = 1). If
it is not clear from context to which program a property
applies, it can be specified explicitly, as in
invariant (bit = 0 ∨ bit = 1) in sender
Progress properties can be expressed with the notation
transient p, which states that the predicate p is eventually
falsified. Under UNITY's weak fairness assumption, this
can be defined using quantification as
transient p ≡ ⟨∃ s :: {p} s {¬p}⟩
which denotes the existence of a statement which, when
executed in a state satisfying p, produces a state that does
not satisfy p. For example, the property transient (bit =
0) can be proven of the program sender, because of the
statement that sets bit to 1.
The transient operator can be used to construct other
liveness properties. The reader may be more familiar with
the ensures operator from UNITY, which is really the
conjunction of a safety and a liveness property.
p ensures q ≡ (p ∧ ¬q  co  p ∨ q)
∧ transient (p ∧ ¬q)
The ensures operator expresses the property that if the
program is in a state satisfying p, it remains in that state
unless q is established, and, in addition, it does not remain
forever in a state satisfying p but not q.
While ensures can express simple progress properties
that are established by a single computational step, proofs
of more complicated progress properties often require the
use of induction to show that the program moves through a
whole sequence of steps in order to achieve some goal. This
notion is captured with the leads-to operator, written ↦.
Informally, the property p ↦ q means that if the program
is in a state satisfying p, it will eventually be in a state
satisfying q, although it may not happen in only one step
and the property p may be falsified in the meantime. For
example, consider the property
len(history) = 3  ↦  len(history) = 5   in receiver                (1)
which states that if receiver is ever in a state where the
length of the history sequence is 3, it will eventually be in
a state where this length is 5. In the meantime, however,
the length of the sequence may (and does!) take on the
value 4, which satisfies neither side of the relation.
Proofs of leads-to properties are carried out inductively,
with ensures as a base case. Formally, the rules of inference
can be summarized as:
(basis)          p ensures q
                 -----------
                   p ↦ q

(transitivity)   p ↦ q,  q ↦ r
                 --------------
                      p ↦ r

(disjunction)    ⟨∀ p : p ∈ S : p ↦ q⟩
                 ----------------------
                  ⟨∃ p : p ∈ S : p⟩ ↦ q
where S is any set of predicates. The rules are written in
hypothesis-conclusion form; each has an assumption above
the line, and a deduction below the line. The basis rule, for
instance, allows one to conclude p ↦ q from p ensures q.
The transitivity rule could be used in a proof of Equation 1,
taking p to be the formula len(history) = 3, q to be
the formula len(history) = 4, and r to be the formula len(history) =
5. The disjunction rule is useful for
breaking up a complicated proof into cases.
The proof rules introduced above come from standard
UNITY, but they are also a part of Mobile UNITY. How-
ever, the notion of what is a basic state transition is different
in the two models, because Mobile UNITY can express
the location- and context-dependent state transitions that
typify mobile computing. Although this means the basic
Hoare triple must be redefined, the rest of the UNITY inference
toolkit, including other rules for carrying out high-level
reasoning which are not shown here, are preserved.
A.2 Composition.
Before giving the new composition mechanisms of Mobile
UNITY, we should first describe the standard UNITY
mechanisms for program composition. The most basic
composition mechanism is known as program union, and
we can use the UNITY union operator, [], to construct a
new system, denoted by sender [] receiver . Operationally,
the new system consists of the union of all the program
variables, i.e., variables with the same name refer to the
same physical memory; the union of all the assignment
statements, which are executed in a fair atomic interleav-
ing; and the intersection of the initial conditions.
Communication between sender and receiver thus takes
place via the shared variable bit . The sender writes an
infinite sequence of 1's and 0's to this variable, fairly inter-
leaved, and the receiver occasionally reads from this variable
to build its history sequence. Note that the receiver
may not see every value written by the sender, because execution
is a fair interleaving, not turn-taking. Also, the
resulting history generated by the receiver may have duplicate
entries because the assignment statements of sender
may be excluded from execution for a finite amount of time.
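The behavior just described can be observed with a small simulation. The C sketch below
interleaves the three atomic statements of sender [] receiver at random; random selection only
approximates weak fairness over a long run, and the sketch is an illustration, not part of the
model.

/* Simulate the union composition: bit is shared by name, and one atomic
   statement is selected per step.  The recorded history may duplicate or
   skip values, as noted above.                                            */
#include <stdio.h>
#include <stdlib.h>

static int bit = 0;                    /* shared under program union */
static int history[16];
static int hist_len = 0;

static void sender_set_one(void)  { bit = 1; }
static void sender_set_zero(void) { bit = 0; }
static void receiver_append(void)
{
    if (hist_len < 16)
        history[hist_len++] = bit;     /* history := history appended with bit */
}

int main(void)
{
    srand(42);
    for (int step = 0; step < 200 && hist_len < 16; step++) {
        switch (rand() % 3) {
        case 0: sender_set_one();  break;
        case 1: sender_set_zero(); break;
        case 2: receiver_append(); break;
        }
    }
    for (int i = 0; i < hist_len; i++)
        printf("%d", history[i]);
    printf("\n");
    return 0;
}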
Another way to compose systems is through the use of
superposition, which combines the components by synchronizing
statements rather than sharing variables. Superposition
on an underlying program F proceeds by adding new
statements and variables to F such that the new statements
do not assign to any of the original underlying variables of
F , and each of the new statements is synchronized with
some statement of F . This allows for (1) the maintenance
of history variables that do not change the behavior of the
underlying program but are needed for certain kinds of
proofs, and (2) the construction of layered systems, where
the underlying layers are not aware of the higher layer variables.
For example, the receiver, instead of being composed via
program union, could have used superposition to synchronize
its assignment to history with the assignments in the
sender that update bit , thus ensuring that it would receive
an exact history of the values written to bit . However,
superposition is limited because communication can take
place in only one direction. Also, like program union, it
is an essentially static form of composition that provides a
fixed relationship between the components. It also would
require that the single statement in the program receiver
be broken up into two statements, one for recording 1's
and the other for recording 0's. The challenge of mobile
computing is to model the system in a more modular fash-
ion, where the receiver does not know about the internal
workings of the sender, and which allows the receiver to be
temporarily decoupled from the sender during periods of
disconnection. Towards this end we must investigate novel
constructs for expressing coordination among the compo-
nents, so that for instance, the receiver can get an exact
history sequence while the components are connected, but
may lose information while disconnected.
A major contribution of [1] was the examination of program
derivation strategies using union and superposition
as basic construction mechanisms. From a purely theoretical
standpoint, it is natural to ask whether we can re-think
these two forms of program composition by reconsidering
the fundamentals of program interaction and what
abstractions should be used for reasoning about composed
programs.
B. System Structuring
If the two programs sender and receiver represent mobile
components, or software running on mobile hardware, then
it is not appropriate to represent the resulting system as
a static composition sender [] receiver . Mobile computing
systems exhibit reconfiguration and disconnection of the
components, and we would like to capture these essentially
new features in our model. Composition with standard
UNITY union would share the variable bit throughout system
execution and would prohibit dynamic reconfiguration
and disconnection of the components.
In this section we introduce a syntactic structure that
makes clear the distinction between parameterized program
types and processes which are the components of the sys-
tem. A more radical departure from standard UNITY is
the isolation of the namespaces of the individual processes.
We assume that variables associated with distinct processes
are distinct even if they bear the same name. For example,
the variable bit in the sender from the earlier example is no
longer automatically shared with the bit in the receiver-
they should be thought of as distinct variables. To fully
specify a process variable, its name should be prepended
with the name of the component in which it appears, for
example sender.bit or receiver.bit . The separate namespaces
for programs serve to hide variables and treat them
as internal by default, instead of universally visible to all
other components. This will facilitate more modular system
specifications, and will have an impact on the way program
interactions are specified for those situations where
programs must communicate. Figure 2 shows the system
sender-receiver which embodies these concepts.
System sender-receiver
  program sender at λ
    ...
  program receiver at λ
    ...
  Components
    receiver at λ0
  [] sender at λ0
  Interactions
    ...
Fig. 2. Example system notation.
The system starts out by declaring its name, in this case
sender-receiver . Then, a set of programs are given, each of
which is structured like a standard UNITY program, the
details of which are elided here. A new feature of these programs
is the addition of a program variable λ, which stands
for the current location of the program. It could have been
placed in the declare section with the other program vari-
ables, but it is promoted here to the same line as the program
name to emphasize its importance when reasoning
about mobile computations. The precise semantics of the
location variable will be discussed in Section II-C.
Assume for now that the internals of each program are
as given in Figure 1. In Figure 2, these programs are really
type declarations that are instantiated in the Components
section. In general, the program types may have parameters
that are bound by the instantiations; for example, the receiver
could have been declared as receiver(i) and might have been
instantiated as receiver(1) at λ1. A whole range of receivers
could have been instantiated in this way.
The transient interactions among the program instances
should be given in the Interactions section. Constructs
used for specifying interactions are unique to Mobile
UNITY and will be introduced in Section II-D. For now
we leave the details of this section blank, but it will be
developed further to capture the location- and context-dependent
aspects of communication between the components.
C. Location
Mobile computing systems must operate under conditions
of transient connectivity. Connectivity will depend
on the current location of components and therefore location
is a part of our model. Just as standard UNITY does
not constrain the types of program variables, we do not
place restrictions on the type of the location variable λ. It
may be discrete or continuous, single or multi-dimensional.
This might correspond to latitude and longitude for a physically
mobile platform, or it may be a network or memory
address for a mobile agent. A process may have explicit
control over its own location which we model by assignment
of a new value to the variable modeling its location.
For instance, a mobile receiver might contain the statement
λ := NewLoc(λ), where the function NewLoc returns a new
location, given the current location. In general, such an assignment
could compute a new location based on arbitrary
portions of the program state, not just the current location.
In a physically moving system, this statement would need
to be compiled into a physical effect like actions on mo-
tors, for instance. In a mobile code (agents) scenario, this
statement would have the effect of migrating an executing
program to a new host. Even if the process does not exert
control over its own location we can still model movement
by an internal assignment statement that is occasionally
selected for execution; any restrictions on the movement of
a component should be reflected in this statement. Also, λ
may still appear on the right-hand side of some assignment
statements if there is any location-dependent behavior internal
to the program.
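As a purely illustrative aside, the following Python fragment sketches one way a located component and an occasionally selected movement statement might be simulated; the class Component and the function new_loc are assumptions of the sketch and are not part of the Mobile UNITY notation.

  import random

  class Component:
      # a program instance carrying a location variable (the lambda of the text)
      def __init__(self, name, loc):
          self.name = name
          self.loc = loc

  def new_loc(current):
      # hypothetical NewLoc: move to an adjacent discrete location, or stay put
      return current + random.choice([-1, 0, 1])

  receiver = Component("receiver", loc=0)
  # an occasionally selected internal statement models (possibly uncontrolled) movement
  receiver.loc = new_loc(receiver.loc)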
D. Interactions
When disconnected, components should behave as ex-
pected. This means that the components must not be
made too aware of the other programs with which they
interface. The sender, for example, must not depend on
the presence of a receiver when it transmits a value. It
is unrealistic for the sender to block when no receiver is
present. However, there are constraints that the two programs
must satisfy when they are connected. We wish to
express these constraints when the programs are composed,
while not cluttering up the individual components in such
a way that they must be aware of and dependent on the
existence of other programs. This argues for the development
of a coordination language sufficiently powerful to
express these interactions and to preserve the modularity
of a single program running in isolation. As we will see in
the sections that follow, this composition mechanism will
have certain aspects in common with UNITY union and
other traits characteristic of superposition.
The new constructs presented here, although they are
primarily motivated by the need to manage context-dependent
coordination of the components in the Interactions
section, are really orthogonal to the system structur-
ing; one can put these in stand-alone UNITY programs as
well. In fact, the formal proof logic abstracts away from the
structuring conventions and assumes a flat set of program
variables (properly qualified by the name of the program in
which they appear) and assignment statements. However,
for now, we present each construct and give an example of
how each may be used in the Interactions section of the
system sender-receiver .
D.1 Extra Statements.
Suppose that the sender and receiver can only communicate
when they are at the same location, and we wish
to express the fact that sender.bit is copied to receiver.bit
when this is true. We might begin the Interactions section
with
receiver.bit := sender.bit
  when sender.λ = receiver.λ
This kind of interaction can be treated like an extra program
statement that is executed in an interleaved fashion
with the existing program statements. The predicate
following when is treated like a guard on the statement
(when can be read as if ). The statement as written copies
the value from sender.bit to receiver.bit when the two programs
are at the same location. Here "at the same loca-
tion" is taken to mean that the programs can communicate,
but in general the when predicate may take into account
arbitrary factors such as the distance between the components
or the presence of other components. Note that
this interaction alone is not guaranteed to propagate every
value written by the sender to the receiver; it is simply another
interleaved statement that is fairly selected for execution
from the pool of all statements. Therefore, the sender
may write several values to bit before the extra statement
executes once even when the programs are co-located. The
receiver may of course move away (by assigning a new value
to receiver .- before any value is copied. Also, the construction
of receiver.history is not necessarily an accurate account
of the history of bits written to receiver.bit , because
the execution of the history-recording action is completely
unconstrained with respect to the extra statement and will
be interleaved in a fair but arbitrary order.
D.2 Reactions.
A reactive statement provides a mechanism for making
certain that each and every value written to sender.bit also
appears at receiver.bit . Such a statement would appear in
the Interactions section as
receiver.bit := sender.bit
  reacts-to sender.λ = receiver.λ
Operationally, the reactive statement is scheduled to execute
whenever the predicate following reacts-to is true.
In this sense it is a statement with higher priority than the
other statements in the system. In general, there may be
other reactive statements implementing other interactions.
Informally, all of the reactive statements have equal priority
and are executed in an interleaved fashion, much like a
standard UNITY program. The set of reactive statements,
sometimes denoted with the symbol R, continues to execute
until no statement would have an effect if executed.
Formally this is known as the fixed point of R. Note that
this particular statement is idempotent, so if there is no interference
from other reactive statements, it reaches fixed
point after one execution. In Section II-E we show how
this construct can be captured in an axiomatic semantics.
Because this propagation occurs after every step of either
component, it effectively presents a read-only shared-
variable abstraction to the receiver program, when the two
components are co-located. Later we will show how to generalize
this notion so that variables shared in a read/write
fashion by multiple components can be modeled. In gen-
eral, reactive statements allow for the modeling of side effects
that a given non-reactive statement may have when
executed in a given context, such as a particular arrangement
of components in space.
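To make the operational reading above concrete, here is a minimal Python sketch, not part of the formal model, of an execution loop in which a fairly chosen non-reactive statement runs and the reactive set R is then executed until it reaches fixed point; the state encoding and statement names are assumptions of the sketch, and the two components are co-located throughout the run.

  import random

  state = {"sender.bit": 0, "receiver.bit": 0,
           "sender.loc": 0, "receiver.loc": 0}     # co-located in this run

  def write0(s): s["sender.bit"] = 0               # a sender statement
  def write1(s): s["sender.bit"] = 1               # the other sender statement
  non_reactive = [write0, write1]

  def propagate(s):                                # the reactive statement of the text
      if s["sender.loc"] == s["receiver.loc"]:
          s["receiver.bit"] = s["sender.bit"]
  reactive = [propagate]

  def run_reactions(s):
      # execute the reactive set R until a full pass changes nothing (fixed point)
      changed = True
      while changed:
          before = dict(s)
          for r in reactive:
              r(s)
          changed = (s != before)

  for _ in range(10):                              # fair nondeterministic interleaving
      random.choice(non_reactive)(state)
      run_reactions(state)
      assert state["receiver.bit"] == state["sender.bit"]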
D.3 Inhibitions.
Note that even with reactive propagation of updates to
sender.bit , the receiver still will not construct an accurate
history of the values that appear on receiver.bit . Because
of the nondeterministic interleaving of statements,
several values may be written to receiver.bit between executions
of the statement that updates receiver.history . In
a real wireless communication system, closely synchronized
clocks and timing considerations would ensure that values
are read at the proper moment so as not to omit or duplicate
any bits in the sequence.
An inhibitor provides a mechanism for constraining
UNITY's nondeterministic scheduler when execution of
some statement would be undesirable in a certain global
context. Adding a label to a statement lets us express inhibition
in a modular way, without modifying the original
statement. For example, consider a new sender program,
given in Figure 3. The two statements each now carry a
program sender at λ
declare
  bit : boolean; counter : integer
initially
  counter = 0
assign
  s0 :: bit, counter := 0, counter + 1
  [] s1 :: bit, counter := 1, counter + 1
Fig. 3. A new version of program sender that counts bits.
label; we can refer to the first as sender.s0 and the second
as sender.s1 . Each also updates the integer counter to re-
flect the number of bits written so far. This counter serves
as an abstraction for a real-time clock, virtual or actual,
that may be running on program sender .
If we assume that the statement of program receiver is
labeled read , for example,
read :: history := history · bit
then we might add the following set of clauses to the Interactions
section:
inhibit sender.s0
  when sender.counter > length(receiver.history)
[] inhibit sender.s1
  when sender.counter > length(receiver.history)
[] inhibit receiver.read
  when length(receiver.history) ≥ sender.counter .
The net effect of inhibit s when p is a strengthening of
the guard on statement s by conjoining it with :p and
thus inhibiting execution of the statement when p is true.
The inhibitions given above constrain execution so that the
receiver reads exactly one bit for every bit written by the
sender. Note that the constraint applies equally well when
the components are disconnected; this is not unrealistic
because we can assume that realtime clocks can remain
roughly synchronized even after disconnection, even though
the reactive propagation of values will cease.
Reactive statements must not be inhibited.
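The following Python sketch, offered only as an illustration, encodes inhibition as the guard strengthening just described and replays the counter example; the tuple encoding of statements and the stand-in bit value appended by read are assumptions of the sketch.

  import random

  # a statement is modeled as a (guard, action) pair; "inhibit s when p"
  # strengthens the guard of s by conjoining it with "not p"
  def inhibit(stmt, p):
      guard, action = stmt
      return (lambda s: guard(s) and not p(s), action)

  state = {"sender.counter": 0, "receiver.history": []}

  s0 = (lambda s: True,
        lambda s: s.update({"sender.counter": s["sender.counter"] + 1}))
  read = (lambda s: True,
          lambda s: s["receiver.history"].append(0))   # bit value elided here

  # the inhibitions of the example: neither side may run ahead of the other
  s0 = inhibit(s0, lambda s: s["sender.counter"] > len(s["receiver.history"]))
  read = inhibit(read, lambda s: len(s["receiver.history"]) >= s["sender.counter"])

  for _ in range(20):                                  # interleaved, fairly selected
      guard, action = random.choice([s0, read])
      if guard(state):
          action(state)
      assert abs(state["sender.counter"] - len(state["receiver.history"])) <= 1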
D.4 Transactions.
With reactive propagation and inhibitions as given
above, execution of the system will append values to
receiver.history even when the two components are
disconnected. Thus there will be subsequences of
receiver.history containing redundant copies of the last
value written by the sender. In an actual wireless transmission
system, the receiver does have some indication of
receipt of a transmission, and would not build a history
that depended only on timing constraints. In our model,
this might be represented as an extra third state that can
be taken on by the wireless transmission medium. Assume
that the bit in each component was therefore declared to be a
boolean extended with a third value ⊥, and initialized to ⊥,
which means simply that no transmission is currently taking place.
Transmission of a bit by the sender then involves placing
a value on the communications medium, and then returning
it to a quiescent state. The receiver may then reactively
record the value written. A transaction provides a form of
sequential execution, and can be used by the statements in
sender that write new values to sender.bit , for example
  s0 :: ⟨bit := 0; bit := ⊥⟩
  [] s1 :: ⟨bit := 1; bit := ⊥⟩ .
A transaction consists of a sequence of assignment state-
ments, enclosed in angle brackets and separated by semi-
colons, which must be scheduled in the specified order with
no other nonreactive statements interleaved in between.
The assignment statements of standard UNITY may be
viewed as singleton transactions. Note that reactive statements
are allowed to execute to fixed point at each semi-colon
and at the end of the transaction; this lets us write
a new receiver program, shown in Figure 4. Here the first
program receiver at λ
declare
  bit : boolean ∪ {⊥}; flag : boolean;
  history : sequence of boolean
initially
  history = ε [] flag = false
assign
  history, flag := history · bit , true
    reacts-to bit ≠ ⊥ ∧ ¬flag
  [] flag := false
    reacts-to bit = ⊥
Fig. 4. A new version of program receiver that reacts to transactions.
reactive statement records values written to the shared bit ,
and the variable flag is added to make the reactive recording
idempotent. Another reactive statement is added to
reset flag when the communications medium returns to a
quiescent state.
Transactions may be inhibited, but may not be reactive.
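A minimal Python sketch of the transaction semantics follows; it hard-codes the receiver's reactions of Figure 4, assumes the components are co-located, and is an illustration rather than the model itself.

  BOTTOM = None   # stands for the quiescent value written as "bottom" in the text

  state = {"sender.bit": BOTTOM, "receiver.bit": BOTTOM,
           "receiver.history": [], "receiver.flag": False}

  def run_reactions(s):
      # reactions of the new receiver, run to fixed point (co-location assumed)
      changed = True
      while changed:
          snap = (s["receiver.bit"], tuple(s["receiver.history"]), s["receiver.flag"])
          s["receiver.bit"] = s["sender.bit"]                  # propagate the medium
          if s["receiver.bit"] is not BOTTOM and not s["receiver.flag"]:
              s["receiver.history"].append(s["receiver.bit"])  # record exactly once
              s["receiver.flag"] = True
          if s["receiver.bit"] is BOTTOM:
              s["receiver.flag"] = False                       # medium quiescent again
          changed = snap != (s["receiver.bit"], tuple(s["receiver.history"]),
                             s["receiver.flag"])

  def transaction(sub_assignments, s):
      # angle-bracketed sequence: reactions reach fixed point at every semicolon
      for assign in sub_assignments:
          assign(s)
          run_reactions(s)

  # <bit := 1 ; bit := bottom> issued by the sender
  transaction([lambda s: s.update({"sender.bit": 1}),
               lambda s: s.update({"sender.bit": BOTTOM})], state)
  assert state["receiver.history"] == [1]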
E. Proof Logic
Now we give a logic for proving properties of programs
that use the above constructs. The execution model has
assumed that each non-reactive statement is fairly selected
for execution, is executed if not inhibited, and then the set
of reactive statements, denoted R, is allowed to execute until
it reaches fixed point, after which the next non-reactive
statement is scheduled. In addition, R is allowed to execute
to fixed point between the sub-statements of a transaction.
These reactively augmented statements thus make up the
basic atomic state transitions of the model and we denote
them by s*, for each non-reactive statement s. We denote
the set of non-reactive statements (including transactions)
by N . Thus, the definitions for basic co and transient
properties become
    p co q  ≡  ⟨∀ s : s ∈ N :: {p} s* {q}⟩
and
    transient p  ≡  ⟨∃ s : s ∈ N :: {p} s* {¬p}⟩ .
Even though s* is really a possibly inhibited statement
augmented by reactions, we can still use the Hoare triple
notation {p} s* {q} to denote that if s* is executed in a state
satisfying p, it will terminate in a state satisfying q. The
Hoare triple notation is appropriate for any terminating
computation.
We first deal with statement inhibition. The following
rule holds for non-reactive statements s, whether they are
transactions or singleton statements:
    p ∧ i(s) ⇒ q ,   {p ∧ ¬i(s)} s_R {q}
    --------------------------------------          (2)
                {p} s* {q}
We define i(s) to be the disjunction of all when predicates
of inhibit clauses that name statement s. Thus, the first
part of the hypothesis states that if s is inhibited in a state
satisfying p, then q must be true of that state also. The
notation s_R denotes the statement s extended by execution
of the reactive statement set R. For singleton, non-transactional
statements, {r} s_R {q} can be deduced from

    {r} s {H} ,   H ↦ (q ∧ FP(R)) in R
    ------------------------------------            (3)
               {r} s_R {q}
where H may be computed as the strongest postcondition
of r with respect to s, or guessed at as appropriate. We
take {r} s {H} from the hypothesis to be a standard Hoare
triple for the non-augmented statement s. The notation
FP(R) denotes the fixed-point predicate of the set of reactive
statements, which can be determined from its text.
The "in R" must be added because the proof of termination
is to be carried out from the text of the reactive
statements, ignoring other statements in the system. This
can be accomplished with a variety of standard UNITY
techniques.
For statements that consist of multiple steps in a trans-
action, we have the rule

    {r} ⟨s_1; ...; s_{n-1}⟩_R {w} ,   {w} (s_n)_R {q}
    ---------------------------------------------------        (4)
               {r} ⟨s_1; ...; s_n⟩_R {q}
where w may be guessed at or derived from r and q as
appropriate. This represents sequential composition of a
reactively-augmented prefix of the transaction with its last
sub-action. Then Equation 3 can be applied as a base
case. This rule may seem complicated, but it represents
standard axiomatic reasoning for ordinary sequential pro-
grams, where each sub-statement is a predicate transformer
that is functionally composed with others.
The notation and proof logic presented above provide
tools for reasoning about concurrent, mobile systems.
Apart from the redefinition of the basic notion of atomic
transitions, we keep the rest of the UNITY inference toolkit
which allows us to derive more complex properties in terms
of these primitives. In the following sections, we will show
how the programming notation can be used to construct
systems of mobile components that exhibit much more dynamic
behavior than could be easily expressed with standard
UNITY.
III. Transient Sharing
In the previous sections, we presented a notation and
logic for reasoning about systems of mobile components.
In this section and in Section IV, we attempt to build
higher level abstractions out of those low-level primitives
that will contribute to the design of systems that are
decoupled and context-sensitive. To be successful, such
abstractions should be familiar to designers, should take
into account the realities of mobile computing, should
be implementable, and should have a strong underlying
formal foundation. An obvious starting point is the pair of
communication mechanisms from standard
UNITY, namely shared variables and statement synchro-
nization. This section examines a variant of sharing suited
to mobile computing systems and gives an underlying semantics
in terms of the notation we have already developed.
In the mobile setting, variables from two independently
moving programs are not always connected, and this is reflected
in the model by the isolation of each of the names-
paces, as was the case with sender.bit and receiver.bit from
our earlier example. However, with the addition of a re-active
propagation statement to the Interactions section,
these two variables took on some of the qualities of a shared
variable. While the two components were at the same lo-
cation, any value written by the sender was immediately
visible to the receiver . The semantics of reactive statements
guarantee that such propagation happens in the
same atomic step as the statement sender.s0 or sender.s1 .
Sharing may also be an appropriate abstraction for communication
at a coarser granularity; for example, one might
think of two mobile hosts as communicating via a (virtual)
shared packet, instead of a single shared bit. This is realistic
because of the lower level protocols, such as exponential
back-off, that are providing serialized access to the communications
medium. At an even coarser (more abstract)
level, there might be data structures that are replicated
on each host, access to which is serialized by a distributed
algorithm implementing mutual exclusion. Of course, no
such algorithm can continue to guarantee both mutual exclusion
and progress in the presence of disconnection, but
our (so far informal) notion of a transiently shared variable
does not require consistency when disconnected.
In what follows, we package these notions into a coordination
construct that can be formally specified and reasoned
about. As a running example, we consider a queue of
documents to be output on a printer. Assume that a lap-top
computer, connected by some wireless communication
medium, is wandering in and out of range of the printer, so
it maintains a local cache of this queue. When the laptop is
in range of the printer, updates to the queue are atomically
propagated, expressed as a transient sharing of the queue.
This may be denoted by the expression
  laptop.q ≈ printer.q when laptop.λ = printer.λ .
The operations on the queue could include the laptop appending
or deleting items from the queue, and the printer
deleting items from the head of the queue as it finishes each
job.
The ≈ relationship can be defined formally in terms of
reactive statements that propagate changes. Because the
sharing is bidirectional, there is slightly more complexity
than the earlier example where a single reactive statement
could propagate values in one direction. In the present
case, we need a mechanism for detecting changes and selectively
propagating only new values. Therefore we add
additional variables to each program that model the previous
state of the queue. In program laptop, this variable
is called q_{printer.q}, and in program printer, this variable is
called q_{laptop.q}. The reactive statements that detect and
propagate changes are
  printer.q , printer.q_{laptop.q} , laptop.q_{printer.q} :=
        laptop.q , laptop.q , laptop.q
    reacts-to laptop.q ≠ laptop.q_{printer.q} ∧ laptop.λ = printer.λ
  [] laptop.q , laptop.q_{printer.q} , printer.q_{laptop.q} :=
        printer.q , printer.q , printer.q
    reacts-to printer.q ≠ printer.q_{laptop.q} ∧ printer.λ = laptop.λ
which execute when any history variable is different from
the current value of the variable which it is tracking, when
the components are connected. Each statement updates
both history variables as well as the remote copy of the
queue. This can be thought of as "echo cancellation," in
that the remote copy is kept the same as its history vari-
able, and the reverse reaction is kept disabled. In addition,
we add statements that simply update the history vari-
ables, without propagating values, when the components
are disconnected:
  laptop.q_{printer.q} := laptop.q
    reacts-to laptop.λ ≠ printer.λ
  [] printer.q_{laptop.q} := printer.q
    reacts-to printer.λ ≠ laptop.λ
These statements reflect the fact that because disconnection
may take place at any moment, one component cannot
know that its change actually did propagate to the remote
component and so the local behavior (update of the history
variable) must be exactly the same in both cases.
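For illustration only, the Python sketch below mimics one round of this "echo cancellation" scheme; the dictionary keys, the connected() test, and the use of lists for queues are assumptions of the sketch.

  state = {"laptop.q": ["a"], "printer.q": ["a"],
           "laptop.q_hist": ["a"], "printer.q_hist": ["a"],   # history copies
           "laptop.loc": 0, "printer.loc": 0}

  def connected(s):
      return s["laptop.loc"] == s["printer.loc"]

  def reactions(s):
      changed = True
      while changed:
          before = {k: list(v) if isinstance(v, list) else v for k, v in s.items()}
          if connected(s) and s["laptop.q"] != s["laptop.q_hist"]:
              # the laptop's copy changed: push it everywhere and cancel the echo
              s["printer.q"] = list(s["laptop.q"])
              s["printer.q_hist"] = list(s["laptop.q"])
              s["laptop.q_hist"] = list(s["laptop.q"])
          if connected(s) and s["printer.q"] != s["printer.q_hist"]:
              s["laptop.q"] = list(s["printer.q"])
              s["laptop.q_hist"] = list(s["printer.q"])
              s["printer.q_hist"] = list(s["printer.q"])
          if not connected(s):
              # while disconnected, only the histories track the local changes
              s["laptop.q_hist"] = list(s["laptop.q"])
              s["printer.q_hist"] = list(s["printer.q"])
          changed = s != before

  state["laptop.q"].append("job1")     # one non-reactive step on the laptop
  reactions(state)
  assert state["printer.q"] == ["a", "job1"]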
Although the reactions given above may meet our informal
expectations for a shared variable while connection is
continuous, there are some subtle issues that arise when
disconnection and reconnection are allowed. For instance,
when disconnection takes place, the laptop and printer each
have separate identical copies of the queue. If changes
are made independently, for instance, if the laptop adds
a few items and the printer deletes a few items, an inconsistent
state arises which may present a problem upon re-
connection. The semantics given above are well defined for
this case: whichever component makes the first assignment
to the reconnected queue will have its copy propagated to
the other component. This may be undesirable; documents
which have already been printed may be re-inserted into the
queue, or documents which have been added by the laptop
while disconnected may be lost.
Instead of wiping out these changes we would like to integrate
them according to some programmer-specified policy.
For inspiration we can look to filesystems and databases
like [9] and [10] that operate in a disconnected mode. Here
the program variables would be replicated files or records of
a database, and update propagation is possible only when
connectivity is available. These systems also provide a way
for the programmer to specify reintegration policies, which
indicate what values the variables should take on when connectivity
is re-established after a period of disconnection.
We call this an engage value. The programmer may also
wish to specify what values each variable should have upon
disconnection. We call these disengage values. For exam-
ple, the print queue example may be extended with the
following:
  laptop.q ≈ printer.q when laptop.λ = printer.λ
    engage laptop.q · printer.q
    disengage ε , printer.q
The engage value specifies that upon reconnection, the
shared queue should take on the value constructed by appending
printer.q to laptop.q . The disengage construct
contains two values; the first is assigned to laptop.q and
the second is assigned to printer.q upon disconnection.
The values given empty laptop.q and leave the printer.q
untouched. This is justified because the queue would realistically
reside on the printer during periods of disconnection
and the laptop would have no access to it. However,
any documents appended to the queue by the laptop are
appended to the print queue upon reconnection.
Formally, the above construct would translate into reactions
that take place when a change in the connection status
is detected. Again we add an auxiliary history variable,
this time to record the status of the connection, denoted
by status_{laptop.q,printer.q}. For engagement, we add
  laptop.q , printer.q , status_{laptop.q,printer.q} :=
        laptop.q · printer.q , laptop.q · printer.q , true
    reacts-to ¬status_{laptop.q,printer.q} ∧ laptop.λ = printer.λ
which integrates both values when the connection status
changes from down to up. For disengagement, we add
  laptop.q , printer.q , status_{laptop.q,printer.q} :=
        ε , printer.q , false
    reacts-to status_{laptop.q,printer.q} ∧ printer.λ ≠ laptop.λ
which assigns different values to each variable when the
connection status changes from up to down. In the absence
of interference, each of these statements executes once and
is then disabled.
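The status-tracking reactions can be pictured with the following Python sketch, an illustration under the assumption that it runs as part of the reactive set at each fixed point; the engage and disengage values follow the print-queue example, and all names are assumptions of the sketch.

  def engage_disengage(s):
      # s["status"] records whether the sharing was engaged at the last fixed point
      connected = s["laptop.loc"] == s["printer.loc"]
      if connected and not s["status"]:
          merged = s["laptop.q"] + s["printer.q"]                   # engage value
          s["laptop.q"], s["printer.q"] = list(merged), list(merged)
          s["status"] = True
      elif not connected and s["status"]:
          s["laptop.q"], s["printer.q"] = [], list(s["printer.q"])  # disengage values
          s["status"] = False

  s = {"laptop.q": ["doc2"], "printer.q": ["doc1"],
       "laptop.loc": 0, "printer.loc": 0, "status": False}
  engage_disengage(s)                     # reconnection detected
  assert s["laptop.q"] == s["printer.q"] == ["doc2", "doc1"]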
Systems like [9] and [10] have a definite notion of reintegration
policies like engage values when a client reconnects
to a fileserver or when two replicas come into contact. Specification
of disengage values may be of less practical significance
unless disconnection can be predicted in advance.
Although this is not feasible for rapidly reconfiguring systems
like mobile telephone networks, it may in fact be a
good abstraction for the file hoarding policies of [9], which
can be carried out as a user prepares to take his laptop
home at the end of a workday, for instance.
Predictable disconnection is not possible in every situa-
tion, for example when we try to model directly a mobile
telephone system. Users travel between base stations at
will and without warning. Also, an operating system for a
wireless laptop may be attempting to hide mobility from
TABLE I: Transient sharing notational constructs.

Read-only transient sharing (A.x is read by B.y when p):
    B.y , B.y_{A.x} , A.x_{B.y} := A.x , A.x , A.x
      reacts-to A.x ≠ A.x_{B.y} ∧ p
    [] A.x_{B.y} := A.x
      reacts-to ¬p

Read-write transient sharing, A.x ≈ B.y when p:
    the two read-only sharings above, one in each direction,
    both guarded by p.

Engagement(1), engage(A.x , B.y) when p value e:
    A.x , B.y , status_{A.x,B.y} := e , e , true
      reacts-to ¬status_{A.x,B.y} ∧ p

Disengagement(2), disengage(A.x , B.y) when p value d_1 , d_2:
    A.x , B.y , status_{A.x,B.y} := d_1 , d_2 , false
      reacts-to status_{A.x,B.y} ∧ ¬p

(1) If engagement is used without a corresponding disengagement, an
extra reaction must be added to reset status_{A.x,B.y} to false when p
becomes false.
(2) Similarly, if disengagement is used without a corresponding
engagement, an extra reaction must be added to set status_{A.x,B.y} to
true when p becomes true.
its users and should be written in such a way that it can
handle sudden, unpredictable disconnection. Such a system
would also be more robust against network failures
not directly related to location. Because of well known results
on the impossibility of distributed consensus in the
presence of failures [21], providing engage and disengage
semantics in these settings is possible only in a probabilistic
sense. This may be adequate; consider, for instance,
the phenomenon of metastable states [22]. Almost every
computing device in use today is subject to some probability
of failure due to metastability, but the probability is so
low that it is almost never considered in reasoning about
these systems. A similarly robust implementation of engage
and disengage may be possible. However, the basic
semantics of ≈ do not imply distributed consensus and are
in fact implementable.
The transient sharing construct given above is a relationship
between two variables, but it is compositional in a very
natural way. For instance, suppose we would like to distribute
the print jobs among two different printers. This
could be accomplished by simply adding another sharing
relationship of the form
printer.q ≈ printer2.q when true
which specifies that the queue should be shared with
printer2 always. Each printer would have atomic access to
this shared queue and could remove items from the head
as they are printed. Because all reacts-to statements are
executed until fixed point, any change to one of the three
variables is propagated to the other two, when the laptop
is co-located with the printer. This transitivity is a major
factor contributing to the construction of modular systems,
as it allows the statement of one component to have far-reaching
implicit effects that are not specified explicitly in
the program code for that component.
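As a small illustration of this transitivity, the Python sketch below keeps one history copy per variable per sharing relation and runs the propagation rules to fixed point; every name in it is an assumption of the sketch, and co-location is taken for granted.

  pairs = [("laptop.q", "printer.q"), ("printer.q", "printer2.q")]
  state = {"laptop.q": [], "printer.q": [], "printer2.q": []}
  # one history copy per variable per sharing relation, as in the definitions above
  hist = {p: {p[0]: [], p[1]: []} for p in pairs}

  def reactions_to_fixed_point(s):
      changed = True
      while changed:
          changed = False
          for pair in pairs:
              for src, dst in (pair, pair[::-1]):
                  if s[src] != hist[pair][src]:       # src changed w.r.t. this relation
                      s[dst] = list(s[src])
                      hist[pair][src] = list(s[src])
                      hist[pair][dst] = list(s[src])  # echo cancellation
                      changed = True

  state["laptop.q"].append("job")                     # one non-reactive step
  reactions_to_fixed_point(state)
  assert state["printer.q"] == state["printer2.q"] == ["job"]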
A summary of the notation developed in this section appears
in Table I, which also breaks up the ≈ construct into
two uni-directional sharing relationships. In general these
can be combined in arbitrary ways, but remember that
any proofs of correctness require proof that R terminates,
so not every construction will be correct. For example, if
there are any cycles in the sharing relationship, and if two
different variables on a given cycle are set to distinct values
in the same assignment statement, then it is not possible to
prove termination. This is analogous to UNITY's restriction
that each statement assigns a unique value to each
left-hand variable. Also, termination of R may be difficult
to prove if some engagement or disengagement values cause
other when predicates to change value.
The transient sharing abstraction presented here has
shown promise as a way to manage the complexity of con-
current, mobile systems. Based on a familiar programming
paradigm, that of shared memory, it provides a mechanism
for expressing highly decoupled and context-dependent sys-
tems. The abstraction is apparently a good one for low-level
wireless communication, and mutual exclusion protocols
that implement the abstraction at a coarser level of
granularity may be simple generalizations of existing repli-
cation, transaction, or consistency algorithms. This section
presented a formal definition for the concept that facilitates
reasoning about systems that make use of it.
IV. Transient Synchronization
The previous section presented new abstractions for
shared state among mobile components, where such sharing
is necessarily transient and location dependent, and where
the components involved execute asynchronously. How-
ever, synchronous execution of statements is also a central
part of many models of distributed systems. In this section
we investigate some new high-level constructs for synchronizing
statements in a system of mobile components,
trying to generalize the synchronization mechanisms of existing
non-mobile models. For example, CSP [23] provides
a general model in which computation is carried out by a
static set of sequential processes, and communication (in-
cluding pure synchronization) is accomplished via blocking,
asymmetric, synchronous, two-party interactions called In-
put/Output Commands. The I/O Automata model [24]
expresses communication via synchronization of a named
output action with possibly many input actions of the same
name. Statement synchronization is also a part of the
UNITY model, where it provides a methodology for proving
properties of systems that can only be expressed with
history variables and for the construction of layered sys-
tems. Synchronous composition can also be used in the
refinement process [25], although our emphasis here is on
composition of mobile programs rather than refinement.
In UNITY, synchronous execution is expressed via su-
perposition, in which a new system is constructed from an
underlying program and a collection of new statements.
Because the goal is to preserve all properties of the underlying
program in the new system, the new statements must
not assign values to any of the variables of the underlying
program. In this way, all execution behaviors that were
allowed by the underlying program executing in isolation
are also allowed by the new superposed system, and any
properties of the underlying program that mention only
underlying variables and were proven only from the text
of the underlying program are preserved. The augmented
statements may be used to keep histories of the underlying
variables or to present an abstraction of the underlying
system as a service to some higher layer environment.
UNITY superposition is an excellent example of how synchronization
can be used as part of a design methodology
for distributed systems. It also shows an important distinction
between our notion of synchronization, which is
the construction of new, atomic statements from two or
more simpler atomic statements by executing them in par-
allel, and the notion of synchronous computing which is
a system model characterized by bounded communication
and computation delays [26]. While the latter is a very
important component of our current understanding of distributed
systems, and in many circumstances is perhaps a
prerequisite to the implementation of the former, it is not
our focus here. Rather, we examine mechanisms that allow
us to compose programs and to combine a group of statements
into a new one through parallel execution. This idea
of statement co-execution was inspired by UNITY superposition.
However, superposition is limited in two important ways.
First, a superposed system is statically defined and synchronization
relationships are fixed throughout the execution
of the system. Continuing the theme of modeling
mobility with a kind of transient program composition,
we would like the ability to specify dynamically changing
and location dependent forms of synchronization where the
participants may enter into and leave synchronization relationships
as the computation evolves. Static forms of
statement synchronization are more limited as discussed
in [27]. Second, superposition is an asymmetric relationship
that subsumes one program to another and disallows
any communication from the superposed to the underlying
program. While this is the source of the strong formal results
about program properties, such a restriction may not
be appropriate in the mobile computing domain where two
programs may desire to make use of each other's services
and carry out bi-directional communication while making
use of some abstraction for synchronization.
Inspired by UNITY superposition, which combines statements
into new atomic actions, we will now explore some
synchronization mechanisms for the mobile computing do-
main. These take the form of coordination constructs involving
statements from each of two separate programs.
Informally, the idea is to allow the programmer to specify
that the two statements should be combined into one
atomic action when a given condition is true. For exam-
ple, consider two programs A and B , where A contains
the integer x and B contains the integer y . Assume there
is a statement named increment in each program, where
A.increment is
  increment :: x := x + 1
and B.increment is
  increment :: y := y + 1
Let us assume the programs are mobile, so each contains
a variable λ, and that they can communicate only when
co-located. Also, assume that the counters represent some
value that must be incremented simultaneously when the
two hosts are together. We might use the following notation
in the Interactions section to specify this coordination:
  A.increment ed B.increment when A.λ = B.λ
Note that this does not prohibit the statements from
executing independently when the programs are not co-
located. If the correctness criteria state that the counters
must remain synchronized at all times, we could add the
following two inhibit clauses to the Interactions section:
inhibit A.increment when (A.λ ≠ B.λ)
[] inhibit B.increment when (A.λ ≠ B.λ)
As distinct from standard UNITY superposition, the ed
construct is a mechanism for synchronizing pairs of statements
rather than specifying a transformation of an underlying
program. Also, the interaction is transient and
location dependent, instead of static and fixed throughout
system execution.
To reason formally about transient statement synchro-
nization, we must express it using lower-level primitives.
The basic idea is that each statement should react to selection
of the other for execution, so that both are executed in
the same atomic step. We can accomplish this by separating
the selection of a statement from its actual execution,
and assume for example that a statement A.s is of the form:
A.s.driver :: ⟨A.s_phase := GO; A.s_phase := IDLE⟩
A.s.action ‖ A.s_f := false
     reacts-to A.s_phase = GO ∧ A.s_f
A.s_f := true
     reacts-to A.s_phase = IDLE                     (5)
where A.s_phase is an auxiliary variable that can hold a value
from the set {GO, IDLE}, and A.s.action is the actual assignment
that must take place. Note that A.s.action reacts
to a value of GO in A.s_phase and that A.s_f is simultaneously
set to false so that the action executes only once.
When A.s_phase returns to IDLE, the flag A.s_f is reset to
true so that the cycle can occur again. The non-reactive
statement A.s.driver will be selected fairly along with all
other non-reactive statements, and because A.s.action reacts
during this transaction, the net effect will be the same
as if A.s.action were listed as a simple non-reactive state-
ment. However, expressing the statement with the three
lines above gives us access to and control over key parts
of the statement selection and execution process. Most im-
portantly, we can provide for statement synchronization by
simply sharing the phase variable between two statements.
Assuming both statements are of the form given in Equation
5, we can define ed as:
A.s ed B.t when r  ≜  A.s_phase ≈ B.t_phase when r          (6)
Then, whenever one of the statements is selected for execution
by executing A.s.driver or B.t.driver , the corresponding
phase variable will propagate to the other statement
and reactive execution of B.t.action or A.s.action will pro-
ceed. Also, transitive and multi-way sharing will give us
transitive and multi-way synchronization. Note that if we
wish to disable the participants from executing, as we did
with inhibit above, we must be sure to inhibit all participants
at the level of the named driver transactions. If we
inhibit only some of them, they may still fire reactively
if ed is used to synchronize them with statements that are
not inhibited. We call the ed operator coselection because
it represents simultaneous selection of both statements for
execution. When used in the Interactions section of a sys-
tem, it embodies the assumption that statement execution
is controlled by a phase variable, as in Equation 6.
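The following Python sketch illustrates coselection; for brevity it collapses the two transiently shared phase variables into a single variable, which is precisely the effect the sharing has while the when-predicate holds, and all other encodings are assumptions of the sketch.

  IDLE, GO = "IDLE", "GO"

  st = {"A.x": 0, "B.y": 0, "phase": IDLE, "A.flag": True, "B.flag": True}

  def reactions(s):
      # the reactive parts of Equation 5 for both increment statements
      changed = True
      while changed:
          before = dict(s)
          if s["phase"] == GO and s["A.flag"]:
              s["A.x"] += 1; s["A.flag"] = False      # A.increment.action fires once
          if s["phase"] == GO and s["B.flag"]:
              s["B.y"] += 1; s["B.flag"] = False      # B.increment.action fires once
          if s["phase"] == IDLE:
              s["A.flag"] = True; s["B.flag"] = True  # flags reset for the next cycle
          changed = s != before

  def driver(s):
      # <phase := GO ; phase := IDLE>, reactions reach fixed point at each semicolon
      s["phase"] = GO;  reactions(s)
      s["phase"] = IDLE; reactions(s)

  driver(st)    # selecting either driver makes both actions fire in one atomic step
  assert (st["A.x"], st["B.y"]) == (1, 1)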
The semantics of Equations 5 and 6 do not really guarantee
simultaneous execution of the statements in the same
sense as UNITY "k", but rather that the statements will
be executed in some interleaved order during R. In many
cases this will be equivalent to simultaneous execution because
neither statement will evaluate variables that were
assigned to by the other. However, there may be cases
when we desire both statements to evaluate their right-
hand-sides in the old state without using values that are
set by the other statement. For these cases, we can add
another computation phase to Equation 5 which models
the evaluation of right-hand sides as a separate step from
assignment to left-hand variables:
A.s.driver :: ⟨A.s_phase := LOAD;
              A.s_phase := STORE;
              A.s_phase := IDLE⟩
A.s.load ‖ A.s_lf := false
     reacts-to A.s_phase = LOAD ∧ A.s_lf
A.s.store ‖ A.s_sf := false
     reacts-to A.s_phase = STORE ∧ A.s_sf
A.s_lf , A.s_sf := true , true
     reacts-to A.s_phase = IDLE                     (7)
Here the phase variables may hold values from the set
{LOAD, STORE, IDLE} and the original A.s.action is
split into two statements, one for evaluating and one for
assigning. A.s.load is assumed to evaluate the right-hand
side of A.s.action and store the results in some internal
variables that are not given explicitly here. A.s.store is
assumed to assign these values to the left-hand variables
of A.s.action . In this way, statements can still be synchronized
by sharing phase variables as in Equation 6, but
now all statements will evaluate right-hand sides during the
LOAD phase, will assign to left-hand variables during the
STORE phase, and will reset the two flags during the IDLE
phase. This prevents interference between any two synchronized
statements, even if the two are connected indirectly
through a long chain of synchronization relationships, and
even if variables assigned to by the statements are shared
indirectly. For example, we return to the increment example
and consider the following set of interactions:
A.increment ed B.increment when r
[] A.x ≈ C.z when r
[] C.z ≈ B.y when r
Here the statements A.increment and B.increment are syn-
chronized, but the variable A.x is indirectly shared with
the variable B.y , via the intermediate variable C.z , when
the predicate r is true. If the increment statements are
of the form given in Equation 5, then changes to one variable
may inadvertently be used in computing the incremented
value of the other variable, which seems to violate
the intuitive semantics of simultaneous execution. In con-
trast, increment statements of the form given in Equation 7
have a separate LOAD phase for computing the right-hand
sides of assignment statements and shared variables are
not assigned to during this phase. Assignment to shared
variables, and the associated reactive propagation of those
values, is reserved until the STORE phase. This isolates
the assignment statements from one another and prevents
unwanted communication.
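A tiny Python illustration of the difference: with interleaved (Equation 5 style) execution the second statement sees the first one's update, while the LOAD/STORE split evaluates both right-hand sides in the old state; the assignments used here are illustrative and are not the increment example of the text.

  st = {"A.x": 1, "B.y": 2}

  # interleaved (Equation 5 style) execution of "A.x := B.y" and "B.y := A.x"
  inter = dict(st)
  inter["A.x"] = inter["B.y"]      # the second statement now sees the updated A.x
  inter["B.y"] = inter["A.x"]
  assert (inter["A.x"], inter["B.y"]) == (2, 2)        # the old values collide

  # LOAD/STORE (Equation 7 style): evaluate every right-hand side, then store
  loaded = {"A": st["B.y"], "B": st["A.x"]}            # LOAD phase
  st["A.x"], st["B.y"] = loaded["A"], loaded["B"]      # STORE phase
  assert (st["A.x"], st["B.y"]) == (2, 1)              # a true simultaneous swap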
There may be situations, however, where we do wish the
two statements to communicate during synchronized exe-
cution. This strategy is central to models like CSP and
I/O Automata, where communication occurs along with
synchronized execution of statements. In CSP, a channel
is used to communicate a value from a single sender to a
single receiver. I/O automata can pass arbitrary parameters
from an output statement to all same-named input
statements. As an example, we will now give a construction
that expresses I/O Automata-style synchronization.
Recall that each automaton has a set of input actions, a
set of internal actions, and a set of output actions. The execution
of any action may modify the state of the machine
to which it belongs; in addition, the execution of any output
action takes place simultaneously with the execution
of all input actions of the same name in all other machines.
We can assume that output actions are of a form similar
to Equation 5, but with an important addition:
A.s.driver :: ⟨A.s_params := exp;
              A.s_phase := GO;
              A.s_phase := IDLE⟩
A.s.action ‖ A.s_f := false
     reacts-to A.s_phase = GO ∧ A.s_f
A.s_f := true
     reacts-to A.s_phase = IDLE                     (8)
We have added the assignment A.s_params := exp as the
first statement of the transaction. This assignment models
binding of the output parameters to a list of auxiliary
variables A.s_params . Here exp is assumed to be a vector
of expressions that may reference other program variables.
For example, assume for a moment that the function of
the A.increment statement from the earlier example is to
increment the variable by a value e which is a function
of the current state of A. Assume also that B.increment
must increment B.y by the same amount when the two
are co-located. This value could be modeled as a parameter
A.increment_p of the synchronization, and A.increment
would then be of the form given in Equation 8:
A.increment.driver :: ⟨increment_p := e;
                      increment_phase := GO;
                      increment_phase := IDLE⟩
x := x + increment_p ‖ increment_f := false
     reacts-to increment_phase = GO ∧ increment_f
increment_f := true
     reacts-to increment_phase = IDLE
And B.increment could also be of this form, with perhaps
a different expression e for the increment value. We
could then express the sharing of the parameter with the
interaction
A.increment_p ≈ B.increment_p when A.λ = B.λ
and the synchronization of the statements with
A.increment ed B.increment when A.λ = B.λ .
Thus, either statement may execute its driver transaction,
which in the first phase assigns a value to the parameter
which is propagated to the other component, in the second
phase triggers execution of both statements, and in the
third resets the flags associated with each statement.
We can easily make use of the guards on the statements
to specify many different and interesting forms of
synchronization with the use of appropriately tailored inhibit
clauses. For example, in addition to the definition of
coselection defined by Equations 5 and 6, we can specify a
notion of coexecution which has the added meaning that
when co-located, the statements may only execute when
both guards are enabled. This might be defined as
coexecute(A.s, B.t, r) ≜
    A.s_phase ≈ B.t_phase when r
    inhibit A.s.driver when r ∧ ¬(A.s.guard ∧ B.t.guard)
    inhibit B.t.driver when r ∧ ¬(A.s.guard ∧ B.t.guard)
which still allows the statements to execute in isolation
when not co-located. In contrast, we might require that the
statements may not execute in isolation when disconnected.
We call this exclusive coexecution and it could be specified
as
xcoexecute(A.s, B.t, r) ≜
    A.s_phase ≈ B.t_phase when r
    inhibit A.s.driver when ¬(r ∧ A.s.guard ∧ B.t.guard)
    inhibit B.t.driver when ¬(r ∧ A.s.guard ∧ B.t.guard)
A similar notion of exclusive coselection could be defined
if we ignore the guards on the statements
xcoselect(A.s, B.t, r) ≜
    A.s_phase ≈ B.t_phase when r
    inhibit A.s.driver when ¬r
    inhibit B.t.driver when ¬r
Each of these constructions could be generalized to pass
parameters from a sender to a receiver. In general, if both
driver statements are of the form specified in Equation 8,
either statement may bind parameters and propagate them
to the other as long as the sharing specified is bi-directional
and the driver statement itself is not inhibited. Because the
semantics of a transaction mean that it will finish before
another is allowed to begin, there is no ambiguity about
which statement is currently executing and no conflict in
assigning parameters to the synchronized execution. Also,
any of the above could be used with statements of the form
in Equation 7 instead of Equation 5 to provide truly simultaneous
access instead of interleaved access to any other
shared variables that might be referenced by or assigned to
by the various actions.
In contrast to the symmetric forms of synchronization
considered so far, the coordination between actions in models
such as I/O Automata and CSP is asymmetric. In each
model, actions are divided into input and output classes;
parameters are passed from output actions to input actions.
The two models differ in the number of participants in a
synchronization. I/O Automata are capable of expressing
one-to-many synchronization styles, while CSP emphasizes
pairwise rendezvous of output actions with input actions.
All of these synchronization styles can be expressed in Mobile
UNITY with appropriate use of transactions, variable
sharing, and inhibitions. For example, one-to-many synchronization
with parameter passing can be simulated by
a quantified set of one-way variable sharing relationships,
where the output statement executes as a transaction and
the input statements are simply reactive. For rendezvous
style synchronization, the phase variable must be propagated
to at most one input action, which can be ensured by
a flag which is set by the first (nondeterministically chosen)
reactive statement and which prevents other propagations
from taking place. Of course, other aspects of CSP, such as
the dynamic creation and deletion of terms, could not be so
easily captured in a UNITY-style model because of fundamental
differences in the underlying approaches. Similarly,
the discussion of I/O Automata has assumed that it is acceptable
to model a whole set of IOA actions with one parameterized
UNITY action, which may not be appropriate
in every case. Even so, modeling the basic synchronization
mechanisms of the two models in Mobile UNITY can be a
useful exercise.
Our point in examining the many different forms of synchronization
is to show the versatility and broad applicability
of the model. Because the field of mobile computing
is so new, we cannot predict which high-level abstractions
will become dominant and gain acceptance in the research
community. However, we believe that the examples above
show that Mobile UNITY can at least formalize direct generalizations
to the mobile setting of existing mechanisms for
synchronous statement execution in models of non-mobile
concurrency, and there is good reason to believe it is capable
of expressing new constructs that may be proposed in
the future.
V. Discussion
The Mobile UNITY notation and logic presented in this
paper is the result of a careful reevaluation of the implications
of mobility on UNITY. We took as a starting point
the notion that mobile components should be modeled as
programs (by the explicit addition of an auxiliary variable
representing location), and that interactions between components
should be modeled as a form of dynamic program
composition (with the addition of coordination constructs).
Reconsidering UNITY-style composition, including union and super-
position, led to a new set of basic programming constructs
amenable to a dynamic and mobile setting. Previous work
extended the UNITY proof logic to handle pairwise forms
of such interaction. This paper presented a more modular
and compositional construction of transient sharing and
synchronization that allowed for multi-party interactions
among components.
We applied these constructs to a very low-level communication
task in an attempt to show that the basic notation
is useful for realistic specifications involving discon-
nection. The seemingly very strong reactive semantics
matched well the need to express dynamically changing
side-effects of atomic actions. Finally, we explored the expressive
power of the new notation by examining new transient
forms of shared variables and synchronization, mostly
natural extensions of the comparable non-mobile abstractions
of interprocess communication; indeed, others may
propose radically different communication abstractions for
mobile computing. The notation was able to express formally
all extensions that were considered and promises to
be a useful research tool for investigating whatever new abstractions
may appear. Plans for future work include the
application of Mobile UNITY to distributed databases with
weakened consistency semantics, capable of continuing operation
in the presence of disconnection. These problems only
recently have received attention in the engineering and re-search
community, and formal reasoning has an important
role to play in communicating and understanding proposed
solutions as well as the assumptions made by each.
VI. Acknowledgements
This paper is based upon work supported in part by
the National Science Foundation of the United States under
Grant Numbers CCR-9217751 and CCR-9624815. Any
opinions, findings, and conclusions or recommendations expressed
in this paper are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
References
Parallel Program Design: A Foundation
"Formal derivation of concurrent pro- grams: An example from industry,"
"A calculus of mobile processes. I,"
"Foundations of actor semantics,"
Actors: A Model of Concurrent Computation in Distributed Systems
"An asynchronous model of locality, fail- ure, and process mobility,"
"Distributed processes and location failures,"
"Experience with disconnected operation in a mobile computing environment,"
"Managing update conflicts in Bayou, a weakly connected replicated storage system,"
"Mobisaic: An information system for a mobile wireless computing environment,"
"Context-aware computing applications,"
"IP mobility support,"
"Mobile UNITY coordination constructs applied to packet forwarding for mobile hosts,"
"An algebraic verification of a mobile network,"
"Specification and proof of an algorithm for location management for mobile communication devices,"
"Mobile UNITY: Reasoning and specification in mobile comput- ing,"
"A logic for concurrent programming: Safety,"
"A logic for concurrent programming: Progress,"
"An axiomatic basis for computer programming,"
"Impossibility of distributed consensus with one faulty process,"
"Anomalous behavior of synchronizer and arbiter circuits,"
"Communicating sequential processes,"
"An introduction to in- put/output automata,"
"Fundamentals of object-oriented specification and modeling of collective behaviors,"
"On the minimal synchronism needed for distributed consensus,"
"Dynamic synchrony among atomic actions,"
Keywords: shared variables; formal methods; mobile computing; Mobile UNITY; transient interactions; synchronization; weak consistency
631246 | Incremental Design of a Power Transformer Station Controller Using a Controller Synthesis Methodology. | AbstractIn this paper, we describe the incremental specification of a power transformer station controller using a controller synthesis methodology. We specify the main requirements as simple properties, named control objectives, that the controlled plant has to satisfy. Then, using algebraic techniques, the controller is automatically derived from this set of control objectives. In our case, the plant is specified at a high level, using the data-flow synchronous Signal language, and then by its logical abstraction, named polynomial dynamical system. The control objectives are specified as invariance, reachability, ... properties, as well as partial order relations to be checked by the plant. The control objectives equations are synthesized using algebraic transformations. | Introduction
Motivations
The Signal language [8] is developed for precise specification of real-time reactive
systems [2]. In such systems, requirements are usually checked a posteriori
using property verification and/or simulation techniques. Control theory of Discrete
Event Systems (DES) allows the use of constructive methods that ensure, a
priori, required properties of the system behavior. The validation phase is then
reduced to properties that are not guaranteed by the programming process.
Different theories for the control of Discrete Event Systems have existed since
the 1980s [14, 1, 5, 13]. Here, we choose to specify the plant in Signal and the control
synthesis as well as verification are performed on a logical abstraction of this
program, called a polynomial dynamical system (PDS) over Z/3Z. The control
This work was partially supported by Électricité de France (EDF) under contract
number M64/7C8321/E5/11 and by the Esprit SYRF project 22703.
of the plant is performed by restricting the controllable input values with respect
to the control objectives (logical or optimal). These restrictions are obtained
by incorporating new algebraic equations into the initial system. The theory of
PDS uses classical tools in algebraic geometry, such as ideals, varieties and
morphisms. This theory sets the basis for the verification and the formal calculus
tool, Sigali built around the Signal environment. Sigali manipulates the system
of equations instead of the sets of solutions, avoiding the enumeration of the
state space. This abstract level avoids a particular choice of set implementations,
such as BDDs, even if all operations are actually based on this representation
for sets.
Fig. 1. Description of the tool
The methodology is the following (see Figure 1). The user first specifies in
Signal both the physical model and the control/verification objectives to be
ensured/checked. The Signal compiler translates the Signal program into a
PDS, and the control/verification objectives in terms of polynomial relations
and operations. The controller is then synthesized using Sigali. The result is a
controller coded by a polynomial and then by a Binary Decision Diagram.
To illustrate our approach, we consider in this paper the application to the
specification of the automatic control system of a power transformer station.
It concerns the response to electric faults on the lines traversing it. It involves
complex interactions between communicating automata, interruption and pre-emption
behaviors, timers and timeouts, reactivity to external events, among
others. The functionality of the controller is to handle the power interruption,
the redirection of supply sources, and the re-establishment of the power following
an interruption. The objective is twofold: the safety of material and uninterrupted
best service. The safety of material can be achieved by (automatic) triggering
circuit-breakers when an electric fault occurs on lines, whereas the best quality
service can be achieved by minimizing the number of customers concerned by
a power cut, and re-establishment of the current as quickly as possible for the
customers hit by the fault (i.e, minimizing the failure in the distribution of power
in terms of duration and size of the interrupted sub-network).
2 Overview of the power transformer station
In this section, we make a brief description of the power transformer station
network as well as the various requirements the controller has to handle.
2.1 The power transformer station description
Électricité de France has hundreds of high voltage networks linked to production
and medium voltage networks connected to distribution. Each station consists of
one or more power transformer stations to which circuit-breakers are connected.
The purpose of an electric power transformer station is to lower the voltage so
that it can be distributed in urban centers to end-users. The kind of transformer
we consider (see Figure 2) receives high voltage lines, and feeds several medium
voltage lines to distribute power to end-users.
Fig. 2. The power transformer station topology.
For each high voltage line, a transformer lowers the voltage. During operation
of this system, several faults can occur (three types of electric faults are
considered: phase PH, homopolar H, or wattmetric W), due to causes internal or
external to the station. To protect the device and the environment, several circuit
breakers are placed in a network of cells in different parts of the station
(on the arrival lines, link lines, and departure lines). These circuit breakers are
informed about the possible presence of faults by sensors.
Power and Fault Propagation: We discuss here some physical properties of
the power network located inside the power transformer station controller. It is
obvious that the power can be seen by the different cells if and only if all the
upstream circuit-breakers are closed. Consequently, if the link circuit-breaker is
opened, the power is cut and no fault can be seen by the different cells of the
power transformer station. The visibility of the fault by the sensors of the cells
is less obvious. In fact, we have to consider two major properties:
On one hand, if a physical fault, considered as an input of our system, is
seen by the sensors of a cell, then all the downstream sensors are not able to
see some physical faults. In fact, the appearance of a fault at a certain level
(the departure level in Figure 3(a) for example) increases the voltage on the
downstream lines and masks all the other possible faults.
(a) The fault masking (b) The fault propagation
Fig. 3. The Fault properties
- On the other hand, if the sensors of a cell at a given level (for example
the sensors of one of the departure cells as illustrated in Figure 3(b)) are
informed about the presence of a fault, then all the upstream sensors (here
the sensors of the arrival cell) detect the same fault. Consequently, it is the
arrival cell that handle the fault.
2.2 The controller
The controller can be divided into two parts. The first part concerns the local
controllers (i.e., the cells). We chose to specify each local controller in Signal,
because they merge logical and numerical aspects. We give here only a brief
description of the behavior of the different cells (more details can be found in
[12, 7]). The other part concerns more general requirements to be checked by
the global controller of the power transformer station. That specification will be
described in the following.
The Cells: Each circuit breaker controller (or cell) defines a behavior beginning
with the confirmation and identification of the type of the fault. In fact, a variety
of faults are transient, i.e., they occur only for a very short time. Since their
duration is so short that they do not cause any danger, the operation of the
circuit-breaker is inhibited. The purpose of this confirmation phase is to let the
transient faults disappear spontaneously. If the fault is confirmed, the handling
consists in opening the circuit-breaker during a given delay for a certain number
of periods and then closing it again. The circuit-breaker is opened in consecutive
cycles with an increased duration. At the end of each cycle, if the fault is still
present, the circuit-breaker is reopened. Finally, in case the fault is still present
at the end of the last cycle, the circuit-breaker is opened definitively, and control
is given to the remote operator.
The specification of a large part of these local controllers has been performed
using the Signal synchronous language [12] and verified using our formal calculus
system, named Sigali [7].
Some global requirements for the controller: Even if it is quite easy to
specify the local controllers in Signal, some other requirements are too informal,
or their behaviors are too complex to be expressed directly as programs.
1. One of the most significant problems concerns the appearance of two faults
(the kind of faults is not important here) at two different departure cells, at
the same time. Double faults are very dangerous, because they imply high
defective currents. At the place of the fault, this results in a dangerous path
voltage that can electrocute people or cause heavy material damages. The
detection of these double faults must be performed as fast as possible as well
as the handling of one of the faults.
2. Another important aspect is to know which of the circuit breakers must be
opened. If the fault appears on the departure line, it is possible to open the
circuit breaker at departure level, at link level, or at arrival level. Obviously,
it is in the interest of users that the circuit be broken at the departure level,
and not at a higher level, so that the fewest users are deprived of power.
3. We also have to take into account the importance of the departure circuit-
breaker. Assume that some departure line, involved in a double fault prob-
lem, supplies a hospital. Then, if the double fault occurs, the controller
should not open this circuit-breaker, since electricity must always be delivered
to a hospital.
The transformer station network as well as the cells are specified in Signal. In
order to take into account the requirements (1), (2) and (3), with the purpose of
obtaining an optimal controller, we rely on automatic controller synthesis that
is performed on the logical abstraction of the global system (network and cells).
3 The Signal equational data flow real-time language
Signal [8] is built around a minimal kernel of operators. It manipulates signals
X, which denote unbounded series of typed values indexed by time t in
a time domain T . An associated clock determines the set of instants at which
values are present. A particular type of signals called event is characterized
only by its presence, and has always the value true (hence, its negation by not
is always false). The clock of a signal X is obtained by applying the operator
event X. The constructs of the language can be used in an equational style to
specify the relations between signals i.e. , between their values and between their
clocks. Systems of equations on signals are built using a composition construct,
thus defining processes. Data flow applications are activities executed over a set
of instants in time. At each instant, input data is acquired from the execution
environment; output values are produced according to the system of equations
considered as a network of operations.
3.1 The Signal language.
The kernel of the Signal language is based on four operations, defining primitive
processes or equations, and a composition operation to build more elaborate
processes in the form of systems of equations.
Functions are instantaneous transformations on the data. The definition of a
signal Y t by the function f is: ∀t, Y t = f(X1 t, ..., Xn t). The signals Y, X1, ..., Xn
are required to have the same clock.
Selection of a signal X according to a boolean condition C is: Y := X when C.
If C is present and true, then Y has the presence and value of X. The clock of Y
is the intersection of that of X and that of C at the value true.
Deterministic merge noted: Z := X default Y has the value of X when it is
present, or otherwise that of Y if it is present and X is not. Its clock is the union
of that of X and that of Y.
Delay gives access to past values of a signal. E.g., the equation ZX t = X t−1,
with initial value V0, defines a dynamic process. It is encoded by: ZX := X$1 with
initialization ZX init V0. X and ZX have equal clocks.
Composition of processes is noted "|" (for processes P1 and P2, with parentheses:
(| P1 | P2 |)). It consists in the composition of the systems of equations; it is
associative and commutative. It can be interpreted as parallelism
between processes.
The following table illustrates each of the primitives with a trace:
Derived features: Derived processes have been defined on the base of
the primitive operators, providing programming comfort. E.g., the instruction
X ^= Y specifies that signals X and Y are synchronous (i.e., have equal clocks);
when B gives the clock of true-valued occurrences of B.
For a more detailed description of the language, its semantic, and applica-
tions, the reader is referred to [8]. The complete programming environment also
features a block-diagram oriented graphical user interface and a proof system
for dynamic properties of Signal programs, called Sigali (see Section 4).
3.2 Specification in Signal of the power transformer station
The transformer station network we are considering contains four departure, two
arrival and one link circuit-breakers as well as the cells that control each
circuit-breaker [7]. The process Physical Model in Figure 4 describes the
power and fault propagation according to the state of the different circuit-
breakers. It is composed of nine subprocesses. The process Power Propagation
describes the propagation of power according to the state of the circuit-breakers
(Open/Closed). The process Fault Visibility describes the fault propagation
and visibility according to the other faults that are potentially present. The
remaining seven processes encode the different circuit-breakers.
Fig. 4. The main process in Signal
The inputs of this main process are booleans that encode the physical faults:
Fault Link M, Fault Arr i M (i=1,2), Fault Dep j M (j=1,...,4). They encode
faults that are really present on the different lines. The event inputs
req close ... and req open ... indicate opening and closing requests of
the various circuit-breakers. The outputs of the main process are the booleans
Fault Link, Fault Arr i, Fault Dep j, representing the signals that are sent
to the different cells. They indicate whether a cell is faulty or not. These outputs
represents the knowledge that the sensors of the different cells have.
We will now see how the subprocesses are specified in Signal.
The circuit-breaker: A circuit-breaker is specified in Signal as follows: The
process Circuit-Breaker takes two sensor inputs: Req Open and Req Close.
They represent opening and closing requests. The output Close represents the
status of the circuit-breaker.
(| Close := ReqClose default (false when ReqOpen) default ZClose
| ZClose := Close $1 init true
| Close ^= Tick
| (ReqClose when ReqOpen) ^= when (not ReqOpen) |)
Fig. 5. The Circuit-breaker in Signal
The boolean Close becomes true when the process receives the event Req Close,
and false when it receives the event Req Open; otherwise it is equal to
its last value (i.e., Close is true when the circuit-breaker is closed and false other-
wise). The constraint (ReqClose when ReqOpen) ^= when (not ReqOpen) says
that the two events Req Close and Req Open are exclusive.
Power Propagation: It is a filter process using the state of the circuit-breakers.
Power propagation also induces a visibility of possible faults. If a circuit-breaker
is open then no fault can be detected by the sensors of downstream cells.
Fig. 6. specification in Signal of the power propagation
This is specified in the process Power Propagation shown in Figure 6.
The inputs are booleans that code the physical faults and the status of
the circuit-breakers. For example, a fault could be detected by the sensor
of the departure cell 1 (i.e., Fault Dep 1 E is true) if there exists a physical
fault at this level and if the upstream circuit-breakers are closed (i.e.,
Close Link=true and Close Arr 1=true and Close Dep 1=true).
Fault visibility and propagation: The Fault Visibility process in Figure
7, specifies fault visibility and propagation. As we explained in Section 2.1, a
fault could be seen by the sensors of a cell only if no upstream fault is present.
Fig. 7. Specification in Signal of the fault propagation and visibility
For example, a fault cannot be detected by the sensor of the departure
cell 1 (i.e., Fault Dep 1 is false), even if a physical fault exists at this level,
whenever another fault exists at the link level (Fault Link K=true) or at the
arrival level 1. Fault Dep 1 is thus true just when the departure cell 1 detects
a physical fault (Fault Dep 1 E) and no upstream fault exists. A contrario, if
a fault is picked up by a cell, then it is also picked up by the upstream cells.
This is for example the meaning of the equation defining Fault Link from
Fault Link K.
4 Verification of Signal programs
The Signal environment contains a verification and controller synthesis tool-
box, Sigali. This tool allows us to prove the correctness of the dynamical behavior
of the system. The equational nature of the Signal language leads to the
use of polynomial dynamical equation systems (PDS) over Z/3Z, i.e., the integers
modulo 3: {−1, 0, 1}, as a formal model of program behavior. The theory of PDS
uses classical concepts of algebraic geometry, such as ideals, varieties and co-
morphisms [6]. The techniques consist in manipulating the system of equations
instead of the sets of solutions, which avoids enumerating state spaces.
To model its behavior, a Signal process is translated into a system of polynomial
equations over Z/3Z [7]. The three possible states of a boolean signal X
(i.e., present and true, present and false, or absent) are coded in a signal variable
x by (present and true → 1, present and false → −1, and absent → 0). For the
non-boolean signals, we only code the fact that the signal is present or absent:
(present → 1 or −1, absent → 0).
Each of the primitive processes of Signal is then encoded as polynomial
equations. Let us just consider the example of the selection operator. C := A
when B means "if b = 1 then c = a, else c = 0". It can be rewritten as a
polynomial equation: c = a(−b − b²). Indeed, the solutions of this equation are
the set of possible behaviors of the primitive process when. For example, if the
signal B is true (i.e., b = 1), then −b − b² = 1 in Z/3Z, so that c is equal
to a.
1 Note that this fault has already been filtered: it can only be present if all the upstream
circuit-breakers are closed.
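As a quick sanity check of the encoding of the selection operator given above, the following Python sketch (not part of the original paper; the tool itself works symbolically) enumerates the nine present/absent combinations of A and B and compares the polynomial c = a(−b − b²) over Z/3Z, as reconstructed above, with the intended semantics of when.

# Brute-force check of the Z/3Z encoding of "C := A when B".
# Coding: present-and-true -> 1, present-and-false -> -1 (stored as 2 mod 3), absent -> 0.
# The polynomial form c = a * (-b - b^2) is the reconstruction given in the text above.

def encode(v):
    # v is True, False or None (absent)
    return {True: 1, False: 2, None: 0}[v]

def when_semantics(a, b):
    # C := A when B: C carries the value of A when B is present and true, otherwise C is absent.
    return a if (b is True and a is not None) else None

def when_polynomial(a, b):
    return (a * ((-b - b * b) % 3)) % 3

for a in (True, False, None):
    for b in (True, False, None):
        assert when_polynomial(encode(a), encode(b)) == encode(when_semantics(a, b)), (a, b)
print("encoding of 'when' matches its semantics on all 9 cases")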
The delay $, which is dynamical, is different because it requires memorizing
the past value of the signal into a state variable x. In order to encode
B := A$1 (with initial value b0), we have to introduce the three following equations:
(1) x' = a + (1 − a²)x,  (2) b = a²x,  (3) x0 = b0,
where x' is the value of the memory at the next instant. Equation (1) describes
what will be the next value x' of the state variable: if a is present, x' is equal to a
(because 1 − a² = 0); otherwise x' is equal to the last value of a, memorized by
x. Equation (2) gives to b the last value of a (i.e., the value of x) and constrains
the clocks of b and a to be equal. Equation (3) corresponds to the initial value of
x, which is the initial value of b.
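To illustrate the three delay equations reconstructed above, here is a small Python simulation of a trace; the formulas x' = a + (1 − a²)x and b = a²x are taken from that reconstruction, so treat the sketch as illustrative rather than as the paper's own material.

# Simulation of the Z/3Z encoding of the delay B := A $1 (init b0).
# Values are 1 (true), 2 (false, i.e. -1 mod 3) and 0 (absent).

def delay_step(x, a):
    x_next = (a + (1 - a * a) * x) % 3   # next memory value
    b = (a * a * x) % 3                  # output: last value of A, on A's clock
    return x_next, b

x = 1                                    # initial memory = initial value of B (here: true)
trace_a = [1, 2, 0, 2, 1]                # A: true, false, absent, false, true
outputs = []
for a in trace_a:
    x, b = delay_step(x, a)
    outputs.append(b)
print(outputs)                           # [1, 1, 0, 2, 2] -> true, true, absent, false, false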
Table 1 shows how all the primitive operators are translated into polynomial
equations. Remark that for the non-boolean expressions, we just translate the
synchronization between the signals.
Table 1. Translation of the primitive operators into polynomial equations
(boolean expressions such as B := not A and C := A or B, and the clock
equations for the non-boolean expressions).
Any Signal specification can be translated into a set of equations called a
polynomial dynamical system (PDS), that can be reorganized as follows:
   X' = P(X, Y),   Q(X, Y) = 0,   Q0(X0) = 0    (1)
where X, X' and Y are vectors of variables in Z/3Z. The
components of the vectors X and X' represent the states of the system and are
called state variables. They come from the translation of the delay operator. Y
is a vector of variables in Z/3Z, called event variables. The first equation is the
state transition equation; the second equation is called the constraint equation
and specifies which events may occur in a given state; the last equation gives
the initial states. The behavior of such a PDS is the following: at each instant t,
given a state x t and an admissible y t, such that Q(x t, y t) = 0, the system evolves
into state x t+1 = P(x t, y t).
Verification of a Signal program: We now explain how verification of a
Signal program (in fact, the corresponding PDS) can be carried out. Using
algebraic operations, it is possible to check properties such as invariance, reachability
and attractivity [7]. Note that most of them will be used in the sequel as
control objectives for controller synthesis purposes. We just give here the basic
definitions of each of these properties.
Definition 1. 1. A set of states E is invariant for a dynamical system if for
every x in E and every y admissible in x, P (x; y) is still in E.
2. A subset F of states is reachable if and only if for every state x 2 F there
exists a trajectory starting from the initial states that reaches x.
3. A subset F of states is attractive from a set of states E if and only if every
state trajectory initialized in E reaches F.
For a more complete review of the theoretical foundation of this approach, the
reader may refer to [6, 7].
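Sigali establishes these properties symbolically, by manipulating polynomials rather than sets of states. Purely as an illustration of Definition 1, the following Python sketch checks invariance, reachability and attractivity by explicit enumeration on a small, hypothetical transition relation (it is not the transformer model).

# Enumerative illustration of Definition 1 on an explicit finite transition relation.
# succ maps a state to the set of its possible successors (all admissible events collapsed).

succ = {0: {0, 1}, 1: {2}, 2: {2}, 3: {0}}      # hypothetical transition relation
init = {0}

def invariant(E):
    # E is invariant iff every transition from E stays in E
    return all(succ[x] <= E for x in E)

def reachable(F):
    # every state of F can be reached from some initial state
    seen, frontier = set(init), set(init)
    while frontier:
        frontier = {s for x in frontier for s in succ[x]} - seen
        seen |= frontier
    return F <= seen

def attractive(F, E):
    # every trajectory starting in E eventually reaches F
    good = set(F)
    changed = True
    while changed:
        new = {x for x in succ if succ[x] and succ[x] <= good} | set(F)
        changed = new != good
        good = new
    return E <= good

print(invariant({2}), reachable({2}), attractive({2}, {1}))   # True True True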
Specification of a property: Using an extension of the Signal language,
named Signal+, it is possible to express the properties to be checked, as well as
the control objectives to be synthesized (see section 5.2), in the Signal program.
The syntax is Sigali(Verif Objective(PROP)).
The keyword Sigali means that the subexpression has to be evaluated by Si-
gali. The function Verif Objective (it could be invariance, reachability,
attractivity, etc) means that Sigali has to check the corresponding property
according to the boolean PROP, which defines a set of states in the corresponding
PDS. The complete Signal program is obtained composing the process specifying
the plant and the one specifying the verification objectives in parallel. Thus,
the compiler produces a file which contains the polynomial dynamical system
resulting from the abstraction of the complete Signal program and the algebraic
verification objectives. This file is then interpreted by Sigali. Suppose that,
for example, we want, in a Signal program named "system", to check the attractivity
of the set of states where the boolean PROP is true. The corresponding
Signal+ program is then:
(| system() (the physical model specified in Signal)
| (definition of the boolean PROP in Signal)
| Sigali(Attractivity(PROP)) |)
The corresponding Sigali file, obtained after compilation of the Signal pro-
gram, is:
read("system.z3z");              => loading of the PDS
Set_States : ... ;               => compute the states where PROP is true
Attractivity(S, Set_States);     => check the attractivity of Set_States from the initial states
The file "system.z3z" contains in a coded form the polynomial dynamical system
that represents the system. Set States is a polynomial that is equal to 0
when the boolean PROP is true. The method consists in verifying that the set of
states where the polynomial Set States takes the value 0 is attractive from the
initial states (the answer is then true or false): Attractivity(S, Set States).
This file is then interpreted by Sigali that checks the verification objective.
4.1 Verification of the power transformer network
In this section, we apply the tools to check various properties of our Signal
implementation of the transformer station. After the translation of the Signal
program, we obtain a PDS with 60 state variables and 35 event variables.
Note that the compiler also checks the causal and temporal concurrency of our
program and produces an executable code. We will now describe some of the
different properties, which have been proved.
(1) "There is no possibility to have a fault at the departure, arrival and link
level when the link circuit-breaker is opened." In order to check this property, we
add to the original specification the following code
(| Error := ((FaultLink or FaultArr1 or FaultArr2 or
FaultDep1 or FaultDep2 or FaultDep3 or FaultDep4)
when OpenLink) default false |)
The Error signal is a boolean which takes the value true when the property is
violated. In order to prove the property, we have to check that there does not
exist any trajectory of the system which leads to the states where the Error
signal is true (Reachable(True(Error))). The produced file is interpreted by
Sigali that checks whether this set of states is reachable or not. In this case,
the result is false, which means that the boolean Error never takes the value
true. The property is satisfied 2 . In the same way, we proved similar properties
when one of the arrival or departure circuit-breakers is open.
(2) "If there exists a physical fault at the link level and if this fault is picked
up by its sensor then the arrival sensors can not detect a fault". We show here
the property for the arrival cell 1. It can be expressed as an invariance of a set
of states.
2 Alternatively, this property could also be expressed as the invariance of the boolean
False(Error), namely Sigali(Invariance(False(Error))).
We have proved similar properties for a departure fault as well as when a physical
fault appears at the arrival level and at the departure level at the same time.
(3) We also proved using the same methods the following property: "If a
fault occurs at a departure level, then it is automatically seen by the upstream
sensors when no other fault exists at a higher level."
All the important properties of the transformer station network have been
proved in this way. Note that the cell behaviors have also been proved (see [7]
for more details).
5 The automatic controller synthesis methodology
5.1 Controllable polynomial dynamical system
Before speaking about control of polynomial dynamical systems, we first need to
introduce a distinction between the events. From now on, we distinguish between
the uncontrollable events which are sent by the system to the controller, and the
controllable events which are sent by the controller to the system.
A polynomial dynamical system S is now written as:
   X' = P(X, Y, U),   Q(X, Y, U) = 0,   Q0(X0) = 0    (2)
where the vector X represents the state variables; Y and U are respectively the
set of uncontrollable and controllable event variables. Such a system is called a
controllable polynomial dynamical system. Let n, m, and p be the respective dimensions
of X, Y, and U. The trajectories of a controllable system are sequences
(x t, y t, u t) such that Q(x t, y t, u t) = 0 and x t+1 = P(x t, y t, u t); the events
include an uncontrollable component y t and a controllable one u t 3. We have no
direct influence on the y t part, which depends only on the state x t, but we observe
it. On the other hand, we have full control over u t and we can choose any value
of u t which is admissible, i.e., such that Q(x t, y t, u t) = 0. To distinguish the two
components, a vector y ∈ (Z/3Z)^m is called an event and a vector u ∈ (Z/3Z)^p
a control. From now on, an event y is admissible in a state x if there exists a
control u such that Q(x, y, u) = 0; such a control is said compatible with y in x.
The controllers: A PDS can be controlled by first selecting a particular initial
state x 0 and then by choosing suitable values for the controls u 0, u 1, ..., u t, ... We will
here consider control policies where the value of the control u t is instantaneously
computed from the value of x t and y t . Such a controller is called a static controller
. It is a system of two equations: C(X, Y, U) = 0 and C0(X0) = 0, where
3 This particular aspect constitutes one of the main differences with [14]. In our case,
the events are partially controllable, whereas in the other case, the events are either
controllable or uncontrollable.
the equation C0 determines the initial states satisfying the control objectives
and the other one describes how to choose the instantaneous controls: when the
controlled system is in state x, and when an event y occurs, any value u such
that Q(x, y, u) = 0 and C(x, y, u) = 0 can be chosen. The behavior of the system
composed with the controller is then modeled by the system S C :
   X' = P(X, Y, U),   Q(X, Y, U) = 0,   C(X, Y, U) = 0,   Q0(X0) = 0,   C0(X0) = 0.
However, not every controller (C, C0) is acceptable. First, the controlled system
S C has to be initialized; thus, the equations Q0(X0) = 0 and C0(X0) = 0 must
have common solutions. Furthermore, due to the uncontrollability of the events
Y , any event that the system S can produce must be admissible by the controlled
system SC . Such a controller is said to be acceptable.
5.2 Traditional Control Objectives
We now illustrate the use of the framework for solving a traditional control
synthesis problem we shall reuse in the sequel.
Suppose we want to ensure the invariance of a set of states E. Let us introduce
the operator pre, defined by: for any set of states F,
pre(F) = {x | for every event y admissible in x, there exists a control u
compatible with y in x such that P(x, y, u) ∈ F}.
Consider now the sequence
   E 0 = E,   E i+1 = E i ∩ pre(E i).   (4)
The sequence (4) is decreasing. Since all sets E i are finite, there exists a j such
that E j+1 = E j. The set E j is then the greatest control-invariant subset of
E. Let g j be the polynomial that has E j as solution set; then the controller C
derived from g j is an admissible feed-back controller and the controlled system S C
verifies the invariance of the set of states E.
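The following Python sketch mimics the fixed-point computation (4) enumeratively on a toy, hypothetical system, in which trans[x][y] collects the successors reachable under the controls compatible with y in x; the real computation is performed symbolically on polynomials and BDDs.

# Enumerative sketch of computation (4): E_0 = E, E_{i+1} = E_i intersected with pre(E_i),
# where pre(F) keeps the states x such that, for every admissible event y, some
# compatible control leads back into F.

trans = {
    0: {'y1': {0, 1}, 'y2': {3}},
    1: {'y1': {2}},
    2: {'y1': {2}, 'y2': {0, 2}},
    3: {'y1': {1}},
}

def pre(F):
    return {x for x in trans
            if all(F & succs for succs in trans[x].values())}

def greatest_control_invariant(E):
    Ei = set(E)
    while True:
        nxt = Ei & pre(Ei)
        if nxt == Ei:
            return Ei
        Ei = nxt

print(greatest_control_invariant({0, 2, 3}))   # {2}: the sequence shrinks {0,2,3} -> {0,2} -> {2}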
Using similar methods, we are also able to compute controllers (C, C 0)
that ensure
- the reachability of a set of states from the initial states of the system,
- the attractivity of a set of states E from a set of states F .
- the recurrence of a set of states E.
We can also consider control objectives that are conjunctions of basic properties
of state trajectories. However, basic properties cannot, in general, be combined
in a modular way. For example, an invariance property puts restrictions on the
4 the solutions of the polynomial P(g) are the triples (x, y, u) that satisfy the relation
"P(x, y, u) is a solution of the polynomial g".
set of state trajectories which may be not compatible with an attractivity prop-
erty. The synthesis of a controller insuring both properties must be effected by
considering both properties simultaneously and not by combining a controller
insuring safety with a controller insuring attractivity independently. For more
details on the way controllers are synthesized, the reader may refer to [4].
Specification of the control objectives: As for verification (Section 4),
the control objectives can be directly specified in Signal+ program, using
the key-word Sigali. For example, if we add in the Signal program the line
Sigali(S Attractivity(S,PROP)), the compiler produces a file that is interpreted
by Sigali which computes the controller with respect to the control
objective. In this particular case, the controller will ensure the attractivity of
the set of states Set States, where Set States is a polynomial that is equal to
zero when the boolean PROP is true. The result of the controller synthesis is a
polynomial that is represented by a Binary Decision Diagram (BDD). This BDD
is then saved in a file that could be used to perform a simulation [9].
Application to the transformer station: We have seen in the previous sec-
tion, that one of the most critical requirements concerns the double fault prob-
lem. We assume here that the circuit-breakers are ideal, i.e. they immediately
react to actuators (i.e. , when a circuit-breaker receives an opening/closing re-
quest, then at the next instant the circuit-breaker is opened/closed). With this
assumption, the double fault problem can be rephrased as follows:
"if two faults are picked up at the same time by two different departure cells,
then at the next instant, one of the two faults (or both) must disappear."
In order to synthesize the controller, we assume that the only controllable
events are the opening and closing requests of the different circuit-breakers. The
other events concern the appearance of the faults and cannot be considered
controllable. The specification of the control objective is then:
(| 2Fault := when (FaultDep1 and FaultDep2)
default when (FaultDep1 and FaultDep3)
default when (FaultDep1 and FaultDep4)
default when (FaultDep2 and FaultDep3)
default when (FaultDep2 and FaultDep4)
default when (FaultDep3 and FaultDep4) default false |)
The boolean 2Fault is true when two faults are present at the same time
and is false otherwise. The boolean Error is true when two faults are present
at two consecutive instants. We then ask Sigali to compute a controller that
forces the boolean Error to be always false (i.e., whatever the behavior, there
is no possibility for the controlled system to reach a state where Error is true).
The Signal compiler translates the Signal program into a PDS, and the
control objectives in terms of polynomial relations and polynomial operations.
Applying the algorithm, described by the fixed-point computation (4), we are
able to synthesize a controller (C, C 0) that ensures the invariance of the set
of states where the boolean Error is false, for the controlled system S C .
The result is a controller coded by a polynomial and a BDD.
Using the controller synthesis methodology, we solved the double fault prob-
lem. However, some requirements have not been taken into account (importance
of the lines, of the circuit-breakers, etc.). This kind of requirement cannot be
solved using traditional control objectives such as invariance, reachability or at-
tractivity. In the next section, we will handle this kind of requirements, using
control objectives expressed as order relations.
5.3 Numerical Order Relation Control Problem
We now present the synthesis of control objectives that considers the way to
reach a given logical goal. This kind of control objectives will be useful in the
sequel to express some properties of the power transformer station controller, as
the one dealing with the importance of the different circuit-breakers. For this
purpose we introduce cost functions on states. Intuitively speaking, the cost
function is used to express priority between the different states that a system
can reach in one transition. Let S be a PDS as the one described by (2). Let us
suppose that the system evolves into a state x, and that y is an admissible event
at x. As the system is generally not deterministic, it may have several controls
u such that Q(x; be two controls compatible with y in
x. The system can evolve into either x Our
goal is to synthesize a controller that will choose between u 1 and u 2 , in such
a way that the system evolves into either x 1 or x 2 according to a given choice
criterion. In the sequel, we express this criterion as a cost function relation.
Controller synthesis method: Let X = (X 1, ..., X n) be the state variables
of the system. Then, a cost function is a map from (Z/3Z)^n to N, which associates
to each x of (Z/3Z)^n some integer k.
Definition 2. Given a PDS S and a cost function c over the states of this
system, a state x 1 is said to be c-better than a state x 2 (denoted x 1 ≥c x 2), if
and only if c(x 2) ≥ c(x 1).
In order to express the corresponding order relation as a polynomial relation,
let us consider k max = max x c(x). The following sets of states are then
computed: A k = {x | c(x) = k}, for k = 0, ..., k max. The sets A k form a
partition of the global set of states. Note that some A i could be reduced to the
empty set. The proof of the following property is straightforward:
Proposition 1. x ≥c x' if and only if there exists a k such that x ∈ A k and
x' ∈ A k ∪ A k+1 ∪ ... ∪ A kmax.
Let P 0, ..., P kmax be the polynomials that have the sets A 0, ..., A kmax as solutions
5 . The order relation ≥c defined by Proposition 1 can then be expressed as a
polynomial relation:
Corollary 1. x ≥c x' ⟺ Rc(x, x') = 0, for a polynomial Rc constructed from
P 0, ..., P kmax.
As we deal with a non-strict order relation, from ≥c we construct a strict order
relation, named >c, defined as: x >c x' ⟺ {x ≥c x' and not (x' ≥c x)}. Its translation
in terms of a polynomial equation is obtained from Rc in the same way.
We now are interested in the direct control policy we want to be adopted by the
system; i.e. , how to choose the right control when the system S has evolved
into a state x and an uncontrollable event y has occurred.
Definition 3. A control u 1 is said to be better compared to a control u 2, if and
only if P(x, y, u 1) ≥c P(x, y, u 2). Using the polynomial approach, it
gives Rc(P(x, y, u 1), P(x, y, u 2)) = 0.
In other words, the controller has to choose, for a pair (x; y), a compatible
control with y in x, that allows the system to evolve into one of the states that
are maximal for the relation Rc . To do so, let us introduce a new order relation
A c defined from the order relation c .
In other words, a triple (x; y; u) is "better" than a triple (x;
the state P (x; reached by choosing the control u is better than the state
reached by choosing the control u 0 .
We will now compute the maximal triples of this new order relation among all
of the triples. To this effect, we use I = f(x;
0g the set of admissible triples (x; u). The maximal set of triples I max is then
provided by the following relation:
I
The characterization of the set of states I max in terms of polynomials is the
5 To compute efficiently such polynomials, it is important to use the Arithmetic Decision
Diagrams (ADD) developed, for example, by [3].
Proposition 2. The polynomial C that has I max as solutions is expressed in terms
of Q, Rc and the existential elimination operator ∃elim U', where the solutions of
∃elim U' (g) are the triples (x, y, u) such that g(x, y, u, u') = 0 for some u'.
Using this controller, the choice of a control u, compatible with y in x, is reduced
to the controls u such that the possible successor state is maximal for the (partial) order
relation ≥c. Note that if a triple (x, y, u) is not comparable with the maximal elements
of the order relation ≥Ac, the control u is allowed by the controller (i.e., u is
compatible with the event y in the state x).
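The control policy of Definition 3 can be paraphrased operationally as: among the controls compatible with y in x, keep those whose successor state has minimal cost. The Python sketch below illustrates this with hypothetical P, Q and cost data; Sigali performs the same selection symbolically, through the relation Rc and the order on triples.

# Enumerative sketch of the control policy of Definition 3.
# P, Q and the cost function below are hypothetical toy data.

cost = {0: 0, 1: 2, 2: 1, 3: 5}

def P(x, y, u):                 # toy successor function
    return (x + y + u) % 4

def Q(x, y, u):                 # toy admissibility constraint: 0 means admissible
    return 0 if (x + u) % 2 == 0 else 1

def best_controls(x, y, controls=(0, 1, 2, 3)):
    compatible = [u for u in controls if Q(x, y, u) == 0]
    best = min(cost[P(x, y, u)] for u in compatible)
    return [u for u in compatible if cost[P(x, y, u)] == best]

print(best_controls(x=1, y=2))   # the controls leading to the cheapest successor state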
Without control, the system can start from one of the initial states of I 0 =
{x | Q 0(x) = 0}. To determine the new initial states of the system, we will
take the ones that are the maximal states (for the order relation Rc) among
all the solutions of the equation Q 0(X) = 0. This computation is performed by
removing from I 0 all the states for which there exists at least one smaller state
for the strict order relation >c. Using the same method as the one previously
described for the computation of the polynomial C, we obtain a polynomial C 0.
The solutions of this polynomial are the states of I 0 that are maximal for the order
relation ≥c.
Theorem 1. With the preceding notations, (C, C 0) is an acceptable controller
for the system S. Moreover, the controlled system S C adopts the
control policy of Definition 3.
Some other characterizations of order relations in terms of polynomials can be
found in [11]. Finally, note that the notion of numerical order relation has been
generalized over a bounded state trajectory of the system, retrieving the classical
notion of Optimal Control [10].
Application to the power transformer station controller: We have seen
in Section 5.2 how to compute a controller that solves the double fault problem.
However, even if this particular problem is solved, other requirements had not
been taken into account. The first one is induced by the obtained controller
itself. Indeed, several solutions are available at each instant. For example, when
two faults appear at a given instant, the controller can choose to open all the
circuit-breakers, or at least the link circuit-breaker. This kind of solutions is not
admissible and must not be considered. The second requirement concerns the
importance of the lines. The first controller (C, C 0) does not handle this kind
of problem and can force the system to open the wrong circuit-breakers.
As consequences, two new requirements must be added in order to obtain a
real controller:
1. The number of opened circuit-breakers must be minimal;
2. The importance of the lines (and of the circuit-breakers) has to be different.
These two requirements introduce a quantitative aspect to the control objectives.
We will now describe the solutions we proposed to cope with these problems.
First, let us assume that the state of a circuit-breaker is coded with a state
variable according to the following convention: the state variable i is equal to 1
if and only if the corresponding circuit-breaker i is closed. CB is then a vector
of state variables which collects all the state variables encoding the states of the
circuit-breakers. To minimize the number of open circuit-breaker and to take into
account the importance of the line, we use a cost function . We simply encode
the fact that the more important is the circuit-breaker, the larger is the cost
allocated to the state variable which encodes the circuit-breaker. The following
picture summarizes the way we allocate the cost.
The cost allocated to each state variable corresponds to the cost when the
corresponding circuit-breaker is opened. When it is closed, the cost is equal to
0. The cost of a global state is simply obtained by adding all the circuit-breaker
costs. With this cost function, it is always more expensive to open a circuit-
breaker at a certain level than to open all the downstream circuit-breakers.
Moreover, the cost allocated to the state variable that encodes the second departure
circuit-breaker (the state variable X dep2) is bigger than the
others because the corresponding line supplies a hospital (for example). Finally
note that the cost function is minimal when the number of open circuit-breakers
is minimal.
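The figure with the actual cost values is not reproduced here, so the following Python sketch uses hypothetical costs that merely respect the two stated constraints: an upstream circuit-breaker costs more to open than all of its downstream circuit-breakers together, and the departure feeding the hospital costs more than the other departures.

# Illustrative cost function over circuit-breaker states (1 = closed, 0 = open).
# The numeric costs are hypothetical, chosen only to satisfy the constraints above.

open_cost = {
    'dep1': 1, 'dep2': 4, 'dep3': 1, 'dep4': 1,   # departures (dep2 feeds the hospital)
    'arr1': 8, 'arr2': 8,                          # arrivals
    'link': 32,                                    # link
}

def state_cost(closed):
    # closed: dict breaker -> 1 (closed) or 0 (open); open breakers contribute their cost
    return sum(open_cost[b] for b, c in closed.items() if c == 0)

all_closed = {b: 1 for b in open_cost}
only_dep1_open = dict(all_closed, dep1=0)
link_open = dict(all_closed, link=0)
print(state_cost(only_dep1_open), state_cost(link_open))   # 1 32 -> prefer opening dep1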
Let us consider the system S C1 . We then introduce an order relation over
the states of the system: a state x 1 is said to be better than a state x 2 if and
only if, for their corresponding sub-vectors CB 1 and CB 2,
we have CB 1 ≥c CB 2. This order relation is then translated into an algebraic
relation following Equation (5), and by applying the constructions described
in Propositions 1 and 2, we obtain a controller (C', C' 0) under
which the controlled
system respects the control strategy.
6 Conclusion
In this paper, we described the incremental specification of a power transformer
station controller using the control theory concepts of the class of polynomial
dynamical systems over Z= 3Z . As this model results from the translation of a
Signal program [8], we have a powerful environment to describe the model for
a synchronous data-flow system. Even if classical control can be used, we have
shown that using the algebraic framework, optimal control synthesis problem
is possible. The order relation controller synthesis technique can be used to
synthesize control objectives which relate more to the way to get to a logical
goal, than to the goal to be reached.
Acknowledgment
: The authors gratefully acknowledge relevant comments
from the anonymous reviewers of this paper.
--R
Supervisory control of a rapid thermal multiprocessor.
Verification of Arithmetic Functions with Binary Diagrams
Control of polynomial dynamic systems: an example
A survey of Petri net methods for controlled discrete event systems.
Polynomial dynamical systems over finite fields.
Formal verification of signal programs: Application to a power transformer station controller.
Programming real-time applications with signal
A design environment for discrete-event controllers based on the Signal language
On the optimal control of polynomial dynamical systems over z/pz.
Partial order control of discrete event systems modeled as polynomial dynamical systems.
Synchronous design of a transformer station controller with Signal.
Controller synthesis for the production cell case study.
The control of discrete event systems.
--TR
--CTR
Kai-Yuan Cai , Yong-Chao Li , Wei-Yi Ning , W. Eric Wong , Hai Hu, Optimal and adaptive testing with cost constraints, Proceedings of the 2006 international workshop on Automation of software test, May 23-23, 2006, Shanghai, China
Xiang-Yun Wang , Wenhui Zhang , Yong-Chao Li , Kai-Yuan Cai, A polynomial dynamic system approach to software design for attractivity requirement, Information Sciences: an International Journal, v.177 n.13, p.2712-2725, July, 2007 | optimal control;discrete event systems;signal;sigali;polynomial dynamical system;power plant;supervisory control problem |
631250 | Quantitative Analysis of Faults and Failures in a Complex Software System. | AbstractThe dearth of published empirical data on major industrial systems has been one of the reasons that software engineering has failed to establish a proper scientific basis. In this paper, we hope to provide a small contribution to the body of empirical knowledge. We describe a number of results from a quantitative study of faults and failures in two releases of a major commercial system. We tested a range of basic software engineering hypotheses relating to: The Pareto principle of distribution of faults and failures; the use of early fault data to predict later fault and failure data; metrics for fault prediction; and benchmarking fault data. For example, we found strong evidence that a small number of modules contain most of the faults discovered in prerelease testing and that a very small number of modules contain most of the faults discovered in operation. However, in neither case is this explained by the size or complexity of the modules. We found no evidence to support previous claims relating module size to fault density nor did we find evidence that popular complexity metrics are good predictors of either fault-prone or failure-prone modules. We confirmed that the number of faults discovered in prerelease testing is an order of magnitude greater than the number discovered in 12 months of operational use. We also discovered fairly stable numbers of faults discovered at corresponding testing phases. Our most surprising and important result was strong evidence of a counter-intuitive relationship between pre- and postrelease faults: Those modules which are the most fault-prone prerelease are among the least fault-prone postrelease, while conversely, the modules which are most fault-prone postrelease are among the least fault-prone prerelease. This observation has serious ramifications for the commonly used fault density measure. Not only is it misleading to use it as a surrogate quality measure, but, its previous extensive use in metrics studies is shown to be flawed. Our results provide data-points in building up an empirical picture of the software development process. However, even the strong results we have observed are not generally valid as software engineering laws because they fail to take account of basic explanatory data, notably testing effort and operational usage. After all, a module which has not been tested or used will reveal no faults, irrespective of its size, complexity, or any other factor. | Introduction
Despite some heroic efforts from a small number of research centres and individuals (see, for
example [Carman et al 1995], [Kaaniche and Kanoun 1996], [Khoshgoftaar et al 1996],
[Ohlsson N and Alberg 1996], [Shen et al 1985]) there continues to be a dearth of published
empirical data relating to the quality and reliability of realistic commercial software systems.
Two of the best and most important studies [Adams 1984] and [Basili and Perricone 1984]
are now over 12 years old. Adams' study revealed that a great proportion of latent software
faults lead to very rare failures in practice, while the vast majority of observed failures are
caused by a tiny proportion of the latent faults. Adams observed a remarkably similar
distribution of such fault 'sizes' across nine different major commercial systems. One
conclusion of the Adams' study is that removing large numbers of faults may have a
negligible effect on reliability; only when the small proportion of 'large' faults are removed
will reliability improve significantly. Basili and Perricone looked at a number of factors
influencing the fault and failure proneness of modules. One of their most notable results was
that larger modules tended to have a lower fault density than smaller ones. Fault density is
the number of faults discovered (during some pre-defined phase of testing or operation)
divided by a measure of module size (normally KLOC). While the fault density measure has
numerous weaknesses as a quality measure (see [Fenton and Pfleeger 1996] for an in-depth
discussion of these) this result is nevertheless very surprising. It appears to contradict the
very basic hypotheses that underpin the notions of structured and modular programming.
Curiously, the same result has been rediscovered in other systems by [Moeller and Paulish
1995]. Recently Hatton provided an extensive review of similar empirical studies and came
to the conclusion:
'Compelling empirical evidence from disparate sources implies that in any software
system, larger components are proportionally more reliable than smaller components'
[Hatton 1997].
Thus the various empirical studies have thrown up results which are counter-intuitive to very
basic and popular software engineering beliefs. Such studies should have been a warning to
the software engineering research community about the importance of establishing a wide
empirical basis. Yet these warnings were clearly not heeded. In [Fenton et al 1994] we
commented on the almost total absence of empirical research on evaluating the effectiveness
of different software development and testing methods. There also continues to be an almost
total absence of published benchmarking data.
In this paper we hope to provide a small contribution to the body of empirical knowledge by
describing a number of results from a quantitative study of faults and failures in two releases
of a major commercial system. In Section 2 we describe the background to the study and the
basic data that was collected. In Section 3 we provide pieces of evidence that one day (if a
reasonable number of similar studies are published) may help us test some of the most basic
of software engineering hypotheses. In particular we present a range of results and examine
the extent to which they provide evidence for or against the following hypotheses:
. Hypotheses relating to the Pareto principle of distribution of faults and failures
1a) a small number of modules contain most of the faults discovered during pre-release testing;
1b) if a small number of modules contain most of the faults discovered during pre-release
testing then this is simply because those modules constitute most of the
code size
2a) a small number of modules contain the faults that cause most failures
2b) if a small number of modules contain most of the operational faults then this is
simply because those modules constitute most of the code size.
. Hypotheses relating to the use of early fault data to predict later fault and failure data (at
the module level):
3) a higher incidence of faults in function testing implies a higher incidence of faults
in system testing;
4) a higher incidence of faults in pre-release testing implies a higher incidence of
failures in operation.
We tested each of these hypotheses from an absolute and normalised fault perspective.
. Hypotheses about metrics for fault prediction
5) size metrics (such as LOC) are good predictors of fault- and failure-prone
modules;
6) complexity metrics are better predictors than simple size metrics of fault- and
failure-prone modules.
. Hypotheses relating to benchmarking figures for quality in terms of defect densities
7) fault densities at corresponding phases of testing and operation remain roughly
constant between subsequent major releases of a software system;
8) software systems produced in similar environments have broadly similar fault
densities at similar testing and operational phases.
For the particular system studied we provide very strong evidence for and against some of
the above hypotheses and also explain how some previous studies that have looked at these
hypotheses are flawed. Hypotheses 1a and 2a are strongly supported, while 1b and 2b are
strongly rejected. Hypothesis 3 is weakly supported, while curiously hypothesis 4 is strongly
rejected. Hypothesis 5 is partly supported, but hypothesis 6 is weakly rejected for the
popular complexity metrics. However, certain complexity metrics which can be extracted
from early design specifications are shown to be reasonable fault predictors. Hypothesis 7 is
partly supported, while 8 can only be tested properly once other organisations publish
analogous results.
We discuss the results in more depth in Section 4.
2 The basic data
The data presented in this paper is based on two major consecutive releases of a large legacy
project developing telecommunication switching systems. We refer to the earlier of the
releases as release n, and the later release as release n+1. For this study 140 and 246
modules respectively from release n and n+1 were selected randomly for analysis from the
set of modules that were either new or had been modified. The modules ranged in size from
approximately 1000 to 6000 LOC (as shown in Table 1). Both releases were approximately
the same total system size.
Table 1. Distribution of modules by size.
LOC          Release n    Release n+1
<1000            23            26
1001-2000        58            85
2001-3000        37            73
3001-4000        15            38
Total           140           246
2.1 Dependent variable
The dependent variable in this study was number of faults. Faults are traced to unique
modules. The fault data were collected from four different phases:
. function test (FT)
. system test (ST)
. first 26 weeks at a number of site tests (SI)
. first year (approx) operation (OP)
Therefore, for each module we have four corresponding instances of the dependent variable.
The testing process and environment used in this project is well established within the
company. It has been developed, maintained, taught and applied for a number of years. A
team separated from the design and implementation organisation develop the test cases based
on early function specifications.
Throughout the paper we will refer to the combination of FT and ST faults collectively as
testing faults. We will refer to the combination of SI and OP faults collectively as
operational faults. We shall also refer at times to failures. Formally, a failure is an observed
deviation of the operational system behaviour from specified or expected behaviour. All
failures are traced back to a unique (operational) fault in a module. Observation of distinct
failures that are traced to the same fault are not counted separately. This means, for example,
that if 20 OP faults are recorded against module x, then these 20 unique faults caused the set
of all failures observed (and which are traced back to faults in module x) during the first year
of operation.
The Company classified each fault found at any phase according to the following:
a) the fault had already been corrected;
b) the fault will be corrected;
c) the fault requires no action (i.e. not treated as a fault);
d) the fault was due to installation problems.
In this paper we have only considered faults classified as b. Internal investigations have
shown that the documentation of faults and their classification according to the above
categories is reliable. A summary of the number of faults discovered in each testing phase for
each system release is shown in Table 2.
pre-release faults post-release faults
Release Function test System test Site test Operation
(sample size 140
modules)
n+1 (sample size 246
modules)
Table
2. Distribution of faults per testing phase
2.2 Independent variables
Various metrics were collected for each module. These included:
. Lines of code (LOC) as the main size measure
. McCabe's cyclomatic complexity.
. Various metrics based on communication (modelled with signals) between modules and
within a module. During the specification phase, the number of new and modified signals
(signals are similar to messages) for each module was specified. Most notably, the metric
SigFF is the count of the number of new and modified signals. This metric was also used
as a measure of interphase complexity. [Ohlsson and Alberg, 1996] provides full details
of these metrics and their computation.
The complexity metrics were collected automatically from the actual design documents using
a tool, ERIMET [Ohlsson, 1993]. This automation was possible as each module was
designed using FCTOOL, a tool for the formal description language FDL which is related to
SDL's process diagrams [Turner, 1993]. The metrics are extracted directly from the FDL
graphs. The fact that the metrics were computed from artefacts available at the design stage
is an important point. It has often been asserted that computing metrics from design
documents is far more valuable than metrics from source code [Heitkoetter et al 1990].
However, there have been very few published attempts to do so. [Kitchenham et al, 1990]
reported on using design metrics, based on Henry and Kafura's information flow metrics
[1981 and 1984], for outlier analysis. [Khoshgoftaar et al, 1996] used a subset of metrics that
"could be collected from design documentation", but the metrics were extracted from the
code.
Numerous studies, such as [Ebert and Liedtke, 1995]; and [Munson and Khoshgoftaar, 1992]
have reported using metrics extracted from source code, but few have reported promising
prediction results based on design metrics.
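One simple way to probe hypotheses of this kind is to rank modules by a design or code metric and by their fault counts and compare the two rankings. The Python sketch below computes a plain Spearman rank correlation on hypothetical data (the metric values and fault counts shown are not the project's); the paper's own analysis in Section 3 is based on the real data set.

# Spearman rank correlation between a module metric and fault counts (no ties handled).

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

loc    = [1200, 2400, 3100, 1800, 4100, 900]    # hypothetical module sizes (LOC)
faults = [3, 7, 6, 2, 12, 1]                    # hypothetical pre-release fault counts
print(round(spearman(loc, faults), 2))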
3 The hypotheses tested and results
Since the data were collected and analysed retrospectively there was no possibility of setting
up any controlled experiments. However, the sheer extent and quality of the data was such
that we could use it to test a number of popular software engineering hypotheses relating to
the distribution and prediction of faults and failures. In this section we group the hypotheses
into four categories. In Section 3.1 we look at hypotheses relating to the Pareto principle of
distribution of faults and failures. It is widely believed, for example, that a small number of
modules in any system are likely to contain the majority of the total system faults. This is
often referred to as the '20-80 rule' in the sense that 80% of the faults are contained in 20%
of the modules. We show that there is strong evidence to support the two most commonly
cited Pareto principles.
The assumption of the Pareto principle for faults has led many practitioners to seek methods
for predicting the fault-prone modules at the earliest possible development and testing
phases. These methods seem to fall into two categories:
1. use of early fault data to predict later fault and failure data;
2. use of product metrics to predict fault and failure data
Given our evidence to support the Pareto principle we therefore test a number of hypotheses
which relate to these methods of early prediction of fault-prone modules. In Section 3.2, we
test hypotheses concerned with 1), while in Section 3.3 we test hypotheses concerned
with 2).
Finally, in Section 3.4 we test some hypotheses relating to benchmarking fault data, and at
the same time provide data that can, in themselves, be valuable in future benchmarking studies.
3.1 Hypotheses relating to the Pareto principle of distribution of faults and
failures
The main part of the total cost of quality deficiency is often found to be caused by very few
faults or fault types [Bergman and Klefsjo 1991]. The Pareto principle [Juran 1964], also
called the 20-80 rule, summarises this notion. The Pareto principle is used to concentrate
efforts on the vital few, instead of the trivial many. There are a number of examples of the
Pareto principle in software engineering. Some of these have gained widespread acceptance,
such as the notion that in any given software system most faults lie in a small proportion of
the software modules. Adams [1984] demonstrated that a small number of faults were
responsible for a large number of failures. [Munson et al 1992] motivated their
discriminative analysis by referring to the 20-80 rule, even though their data demonstrated a
somewhat different rule. [Zuse 1991] used Pareto techniques to identify the most common types of faults
found during function testing. Finally, [Schulmeyer and MacManus 1987] described how the
principle supports defect identification, inspection and applied statistical techniques.
We investigated four related Pareto hypotheses:
Hypothesis 1a: a small number of modules contain most of the faults discovered during pre-release
testing (phases FT and ST);
Hypothesis 1b: if a small number of modules contain most of the faults discovered during
pre-release testing then this is simply because those modules constitute most of the code
size.
Hypothesis 2a: a small number of modules contain most of the operational faults (meaning
failures as we have defined them above, observed in phases SI and OP);
Hypothesis 2b: if a small number of modules contain most of the operational faults then this
is simply because those modules constitute most of the code size.
We now examine each of these in turn.
3.1.1 Hypothesis 1a: a small number of modules contain most of the faults
discovered during testing (phases FT and ST)
Figure
1 illustrates that 20% of the modules were responsible for nearly 60% of the faults
found in testing for release n. An almost identical result was obtained for release n+1 but is
not shown here. This is also almost identical to the result in earlier work where the faults
from both testing and operation were considered [Ohlsson et al 1996]. This, together with
other results such as [Munson et al 1992], provides very strong support for hypothesis 1a),
and even suggests a specific Pareto distribution in the area of 20-60. This 20-60 finding is
not as strong as the one observed by [Compton and Withrow, 1990] (they found that 12% of
the modules, referred to as packages, accounted for 75% of all the faults during system
integration and test), but is nevertheless important.

Figure 1: Pareto diagram showing percentage of modules versus percentage of faults for Release n
3.1.2 Hypothesis 1b: if a small number of modules contain most of the faults
discovered during pre-release testing then this is simply because those modules
constitute most of the code size.
Since we found strong support for hypothesis 1a, it makes sense to test hypothesis 1b. It is
popularly believed that hypothesis 1a is easily explained away by the fact that the small
proportion of modules causing all the faults actually constitute most of the system size. For
example, [Compton and Withrow, 1990] found that the 12% of modules accounting for
75% of the faults accounted for 63% of the LOC. In our study we found no evidence to
support hypotheses 1b. For release n, the 20% of the modules which account for 60% of the
faults (discussed in hypothesis 1a) actually make up just 30% of the system size. The result
for release n+1 was almost identical.
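
To make the kind of computation behind hypotheses 1a and 1b concrete, the following small Python sketch shows how the share of faults in the top 20% of modules, and the share of the code those modules represent, can be calculated. The per-module numbers in the example are hypothetical and are not taken from the case study data.

# Illustrative sketch (hypothetical data): share of faults and of code
# accounted for by the top 20% of modules when ranked by fault count.
def pareto_share(faults, loc, module_fraction=0.20):
    modules = sorted(zip(faults, loc), key=lambda m: m[0], reverse=True)
    k = max(1, int(round(module_fraction * len(modules))))
    top = modules[:k]
    fault_share = sum(f for f, _ in top) / float(sum(faults))
    size_share = sum(s for _, s in top) / float(sum(loc))
    return fault_share, size_share

if __name__ == "__main__":
    faults = [40, 25, 10, 5, 4, 3, 2, 1, 0, 0]                     # faults per module
    loc    = [900, 700, 400, 350, 300, 280, 250, 220, 200, 150]    # LOC per module
    f_share, s_share = pareto_share(faults, loc)
    print("top 20%% of modules hold %.0f%% of the faults and %.0f%% of the LOC"
          % (100 * f_share, 100 * s_share))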
3.1.3 Hypothesis 2a: a small number of modules contain most of the
operational faults (meaning failures as we have defined them above, namely phases SI
and OP)
We discovered not just support for a Pareto distribution, but a much more exaggerated one
than for hypothesis 1a. Figure 2 illustrates this Pareto effect in release n. Here 10% of the
modules were responsible for 100% of the failures found. The result for release n+1 is not so
remarkable but is nevertheless still quite striking: 10% of the modules were responsible for
80% of the failures.

Figure 2: Pareto diagram showing percentage of modules versus percentage of failures for Release n
3.1.4 Hypothesis 2b: if a small number of modules contain most of the
operational faults then this is simply because those modules constitute most of the code
size.
As with hypothesis 1a, it is popularly believed that hypothesis 2a is easily explained away by
the fact that the small proportion of modules causing all the failures actually constitute most
of the system size. In fact, not only did we find no evidence for hypothesis 2b, but we
discovered very strong evidence in favour of a converse hypothesis:
most operational faults are caused by faults in a small proportion of the code
For release n, 100% of the operational faults are contained in modules that make up just 12%
of the entire system size. For release n+1 60% of the operational faults were contained in
modules that make up just 6% of the entire system size, while 78% of the operational faults
were contained in modules that make up 10% of the entire system size.
3.2 Hypotheses relating to the use of early fault data to predict later fault
and failure data
Given the likelihood of hypotheses 1a and 2a there is a strong case for trying to predict the
most fault-prone modules as early as possible during development. In this and the next
subsection we test hypotheses relating to methods of doing precisely that. First we look at
the use of fault data collected early as a means of predicting subsequent faults and failures.
Specifically we test the hypotheses:
Hypothesis 3: Higher incidence of faults in function testing (FT) implies higher incidence of
faults in system testing (ST)
Hypothesis 4: Higher incidence of faults in all pre-release testing (FT and ST) implies
higher incidence of faults in post-release operation (SI and OP).
We tested each of these hypotheses from an absolute and normalised fault perspective. We
now examine the results.
3.2.1 Hypothesis 3: Higher incidence of faults in function testing (FT) implies
higher incidence of faults in system testing (ST)
The results associated with this hypothesis are not very strong. In release n (see Figure 3),
50% of the faults in system test occurred in modules which were responsible for 37% of the
faults in function test.
Figure 3: Accumulated percentage of the absolute number of faults in system test when modules are ordered with respect to the number of faults in system test and function test, for release n.
From a prediction perspective the figures indicate that the most fault-prone modules during
function test will, to some extent, also be fault-prone in system test. However, the 10% most
fault-prone modules in system test are responsible for 38% of the faults in system test,
but the 10% most fault-prone modules in function test are only responsible for 17% of the
faults in system test. This pattern persists up to 75% of the modules. This means that nearly
20% of the faults in system test need to be explained in another way. The same pattern was
found when using normalised data (faults/LOC) instead of absolute counts, even though the
percentages were generally lower and the prediction a bit poorer.
The results were only slightly different for release n+1, where we found:
. 50% of the faults in system test occurred in modules which were responsible for 25% of
the faults in function test
. 10% of the most fault-prone modules in system test are responsible for 46% of the faults
in system test, but 10% of the most fault-prone modules in function test are only
responsible for 24% of the faults in system test.
These results, and also those obtained when using normalised data instead of absolute counts,
are very similar to the results in release n.
3.2.2 Hypothesis 4: Higher incidence of faults in all pre-release testing (FT
and ST) implies higher incidence of faults in post-release operation (SI and OP).
The rationale behind hypothesis 4 is that the relatively small proportion of modules in a
system that account for most of the faults are likely to be fault-prone both pre- and post
release. Such modules are somehow intrinsically complex, or generally poorly built. 'If you
want to find where the faults lie, look where you found them in the past' is a very common
and popular maxim. For example, [Compton and Withrow, 1990] found as much as six
times greater post-delivery defect density when analysing modules with faults discovered
prior to delivery.
In many respects the results in our study relating to this hypothesis are the most remarkable
of all. Not only is there no evidence to support the hypothesis, but again there is strong
evidence to support a converse hypothesis. In both release n and release n+1 almost all of the
faults discovered in pre-release testing appear in modules which subsequently reveal almost
no operational faults. Specifically, we found:
. In release n (see Figure 4), 93% of faults in pre-release testing occur in modules which
have NO subsequent operational faults (of which there were 75 in total). Thus 100% of
the 75 failures in operation occur in modules which account for just 7% of the faults
discovered in pre-release testing.
Figure 4: Scatter plot of pre-release faults against post-release faults for version n (each dot represents a module)
. In release n+1 we observed a much greater number of operational faults, but a similar
phenomenon to that of release n (see Figure 5). Some 77% of pre-release faults occur in
modules which have NO post-release faults. Thus 100% of the 366 failures in operation
occur in modules which account for just 23% of the faults discovered in function and
system test.
These remarkable results are also exciting because they are closely related to the Adams'
phenomenon. The results have major ramifications for one of the most commonly used
software measures, fault density. Specifically it appears that modules with high fault density
pre-release are likely to have low fault-density post-release, and vice versa. We discuss the
implications at length in Section 4.
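
The comparison underlying hypothesis 4 amounts to a simple cross-tabulation of pre- and post-release fault counts per module. The Python sketch below illustrates the calculation; the (pre, post) pairs are invented for the example, whereas the percentages quoted above come from the actual data set.

# Illustrative sketch (hypothetical data): how much of the pre-release fault
# total lies in modules that never failed in operation.
def pre_post_summary(pairs):
    total_pre = sum(pre for pre, _ in pairs)
    pre_in_clean = sum(pre for pre, post in pairs if post == 0)
    return (100.0 * pre_in_clean / total_pre,
            100.0 * (total_pre - pre_in_clean) / total_pre)

if __name__ == "__main__":
    pairs = [(12, 0), (9, 0), (7, 0), (3, 2), (1, 5), (0, 1)]  # (pre, post) per module
    clean_pct, failing_pct = pre_post_summary(pairs)
    print("%.0f%% of pre-release faults lie in modules with no post-release faults;"
          " the failing modules account for only %.0f%%" % (clean_pct, failing_pct))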
3.3 Hypotheses about metrics for fault prediction
In the previous subsection we were concerned with using early fault counts to predict
subsequent fault prone modules. In the absence of early fault data, it has been widely
proposed that software metrics (which can be automatically computed from module designs
or code) can be used to predict fault prone modules. In fact this is widely considered to be
the major benefit of such metrics [Fenton and Pfleeger 1997]. We therefore attempted to test
the basic hypotheses which underpin these assumptions. Specifically we tested:
Hypothesis 5: Size metrics (such as LOC) are good predictors of fault and failure prone
modules.
Hypothesis 6: Complexity metrics are better predictors than simple size metrics, especially at
predicting fault-prone modules.
3.3.1 Hypothesis 5: Size metrics (such as LOC) are good predictors of fault
and failure prone modules.
Strictly speaking, we have to test several different, but closely, related hypotheses:
Hypothesis 5a: Smaller modules are less likely to be failure prone than larger ones
Hypothesis 5b Size metrics (such as LOC) are good predictors of number of pre-release
faults in a module
Hypothesis 5c: Size metrics (such as LOC) are good predictors of number of post-release
faults in a module
Figure 5: Scatter plot of pre-release faults against post-release faults for version n+1 (each dot represents a module)
Hypothesis 5d: Size metrics (such as LOC) are good predictors of a module's (pre-release)
fault-density
Hypothesis 5e: Size metrics (such as LOC) are good predictors of a module's (post-release)
fault-density
Hypothesis 5a underpins, in many respects, the principles behind most modern programming
methods, such as modular, structured, and object-oriented programming. The general idea has been that
smaller modules should be easier to develop, test, and maintain, thereby leading to fewer
operational faults in them. On the other hand, it is also accepted that if modules are made too
small then all the complexity is pushed into the interface/communication mechanisms. Size
guidelines for decomposing a system into modules are therefore desirable for most
organisations.
It turns out that the small number of relevant empirical studies have produced counter-intuitive
results about the relationship between size and (operational) fault density. Basili and
Perricone [1984] reported that fault density appeared to decrease with module size. Their
explanation for this was the large number of interface faults spread equally across all
modules. The relatively high proportion of small modules was also offered as an
explanation. Other authors, such as [Moeller and Paulish 1995] who observed a similar trend,
suggested that larger modules tended to be under better configuration management than
smaller ones which tended to be produced 'on the fly'. In fact our study did not reveal any
similar trend, and we believe the strong results of the previous studies may be due to
inappropriate analyses.
We begin our results with a replication of the key part of the [Basili and Perricone 1984]
study.
Table 3 (which compares with Basili and Perricone's Table III) shows the number of
modules that had a certain number of faults. The table also displays the figures for the
different types of modules and the percentages. The data set analysed in this paper has, in
comparison with [Basili and Perricone 1984], a lower proportion of modules with few faults,
and the proportion of new modules is lower. In subsequent analysis all new modules have
been excluded. The modules are also generally larger than those in [Basili and Perricone
1984], but we do not believe this introduces any bias.
The scatter plots in Figure 6, of lines of code versus the number of pre- and post-release faults,
do not reveal any strong evidence of trends for release n+1. Neither could any strong
trends be observed when lines of code versus the total number of faults were graphed (Figure
7). The results for release n were reasonably similar.
Figure 6: Scatterplots of LOC against pre- and post-release faults for release n+1 (each dot represents a module).

Table 3. Number of modules affected by a given number of faults for Release n (140 modules, 1815 faults) and Release n+1 (246 modules, 3795 faults), broken down by fault-count band and by module type (modified, new and, for release n+1, split modules), together with percentages.
Figure 7: Scatterplots of LOC against all faults for release n+1 (each dot represents a module).

When Basili and Perricone could not see any trend they calculated the number of faults per
1000 executable lines of code. Table 4 (which compares with Table VII in [Basili and
Perricone 1984]) shows these results for our study.
                          Release n                        Release n+1
Module size    Frequency    Faults/1000 Lines    Frequency    Faults/1000 Lines
1000           15           4.77                 17           6
2000
3000           22           5.74                 37           5
Table 4. Faults/1000 lines of code for release n and n+1.
Superficially, the results in Table 4 for release n+1 appear to support the Basili and Perricone
finding. In release n+1 it is clear that the smallest modules have the highest fault density.
However, the fault density is very similar for the other groups. For release n the result is the
opposite of what was reported by Basili and Perricone. The approach to grouping data as
done in [Basili and Perricone 1984] is highly misleading. What Basili and Perricone failed to
show was a simple plot of fault density against module size, as we have done in Figure 8 for
release n+1. Even though the grouped data for this release appeared to support the Basili and
Perricone findings, this graph shows only very high variation for the small modules and no
evidence that module size has a significant impact on fault density. Clearly other explanatory
factors, such as design, inspection and testing effort per module, will be more important.
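
The effect of grouping can be illustrated with a short Python sketch: the per-module fault densities vary widely, while the faults-per-KLOC figure computed for each size band (1000 LOC wide, as in Table 4) smooths this variation away. The module data used here are hypothetical.

# Illustrative sketch (hypothetical data): per-module fault density versus
# the grouped faults-per-KLOC figures obtained from 1000-LOC size bands.
from collections import defaultdict

def density_per_module(faults, loc):
    return [1000.0 * f / l for f, l in zip(faults, loc)]

def density_per_size_band(faults, loc, band_width=1000):
    by_band = defaultdict(lambda: [0, 0])                # band -> [faults, LOC]
    for f, l in zip(faults, loc):
        band = ((l - 1) // band_width + 1) * band_width  # up to 1000 LOC -> band 1000, etc.
        by_band[band][0] += f
        by_band[band][1] += l
    return {band: 1000.0 * fb / lb for band, (fb, lb) in sorted(by_band.items())}

if __name__ == "__main__":
    loc    = [200, 450, 800, 950, 1500, 1800, 2400, 2900]   # LOC per module
    faults = [6,   1,   0,   12,  7,    2,    10,   4]      # faults per module
    print("per module:", [round(d, 1) for d in density_per_module(faults, loc)])
    print("per band  :", density_per_size_band(faults, loc))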
Figure 8: Scatter plot of module fault density against size (lines of code) for release n+1
The scatter plots assume that the data belong to an interval or ratio scale. From a prediction
perspective this is not always necessary. In fact, a number of studies are built on the Pareto
principle, which often only requires ordinal data. In the tests of the hypotheses above
we have used a technique based on ordinal data, called Alberg diagrams [Ohlsson and
Alberg 1996], to evaluate the independent variables' ability to rank the dependent variable.
The ranking ability of LOC is assessed in Figure 9. The diagram reveals that, even though
the previous analysis did not indicate any predictability, LOC is quite good at ranking the most
fault-prone modules, and for the most fault-prone modules (the top 20 percent) it does considerably
better than any of the previous analyses.
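
The Alberg diagram itself is straightforward to compute: modules are ranked by the predictor in decreasing order and the share of all faults covered is accumulated at each module percentile. The sketch below (hypothetical data, LOC as the predictor) illustrates the calculation.

# Illustrative sketch (hypothetical data): points of an Alberg diagram, i.e.
# cumulative % of faults covered when modules are ranked by a predictor.
def alberg_points(predictor, faults, percentiles=(10, 20, 40, 60, 80, 100)):
    order = sorted(range(len(predictor)), key=lambda i: predictor[i], reverse=True)
    total = float(sum(faults))
    points = []
    for p in percentiles:
        k = max(1, int(round(p / 100.0 * len(order))))
        covered = sum(faults[i] for i in order[:k])
        points.append((p, 100.0 * covered / total))
    return points

if __name__ == "__main__":
    loc    = [2400, 300, 900, 1500, 700, 200, 1100, 500]  # predictor (LOC) per module
    faults = [14,   2,   9,   6,    3,   0,   8,    1]    # all faults per module
    for mod_pct, fault_pct in alberg_points(loc, faults):
        print("top %3d%% of modules by LOC -> %5.1f%% of faults" % (mod_pct, fault_pct))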
Figure 9. Accumulated percentage of the absolute number of all faults when modules are ordered with respect to LOC, for release n+1.
3.3.2 Hypothesis 6: Complexity metrics are better predictors than simple size
metrics of fault and failure-prone modules
'Complexity metrics' is the rather misleading term used to describe a class of measures that
can be extracted directly from source code (or some structural model of it, like a flowgraph
representation). Occasionally (and more beneficially) complexity metrics can be extracted
before code is produced, such as when the detailed designs are represented in a graphical
language like SDL (as was the case for the system in this study). The archetypal complexity
metric is McCabe's cyclomatic number [McCabe, 1976], but there have in fact been many
dozens that have been published [Zuse 1991]. The details, and also the limitations of
complexity metrics, have been extensively documented (see [Fenton and Pfleeger 1996]) and
we do not wish to re-visit those issues here. What we are concerned with here is the
underlying assumption that complexity metrics are useful because they are (easy to extract)
indicators of where the faults lie in a system. For example, Munson and Khoshgoftaar
asserted:
'There is a clear intuitive basis for believing that complex programs have more faults in
them than simple programs' [Munson and Khoshgoftaar, 1992]
An implicit assumption is that complexity metrics are better than simple size measures in this
respect (for if not there is little motivation to use them). We have already seen, in section
3.3.1, that size is a reasonable predictor of number of faults (although not of fault density).
We now investigate the case of complexity metrics such as the cyclomatic number.
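
As a reminder of what the archetypal complexity metric actually measures, the sketch below computes McCabe's cyclomatic number v(G) = E - N + 2P from an explicit control-flow graph; the small graph (one decision, one loop) is a made-up example rather than anything taken from the case study.

# Illustrative sketch: cyclomatic number v(G) = E - N + 2P for a hypothetical
# control-flow graph containing one decision and one loop.
def cyclomatic(edges, nodes, components=1):
    return len(edges) - len(nodes) + 2 * components

if __name__ == "__main__":
    nodes = ["entry", "if", "then", "else", "loop", "exit"]
    edges = [("entry", "if"), ("if", "then"), ("if", "else"),
             ("then", "loop"), ("else", "loop"),
             ("loop", "loop"), ("loop", "exit")]
    print("v(G) =", cyclomatic(edges, nodes))   # 7 - 6 + 2 = 3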
In testing the last hypothesis we demonstrated the problem with comparing average figures
for different size intervals. Therefore, instead of replicating the relevant analysis in [Basili and Perricone
1984] by calculating the average cyclomatic number for each module size class and then
plotting the results, we simply generated scatter plots and Alberg diagrams.
When the cyclomatic complexity and the pre- and post-release faults were graphed for
release n+1 (Figure 10) we observed a number of interesting trends. The most complex
modules appear to be more fault-prone in pre-release, but appear to have nearly no faults in
post-release. The most fault-prone modules in post-release appear to be the less complex
modules. This could be explained by how test effort is distributed over the modules: modules
that appear to be complex are treated with extra care than simpler ones. Analysing in
retrospect the earlier graphs for size versus faults reveal a similar pattern.
The scatter plot for the cyclomatic complexity and the total number of faults (Figure 11)
shows again some small indication of correlation. The Alberg diagrams were similar to those
obtained when size was used.

Figure 11: Scatterplot of cyclomatic complexity against all faults for release n+1 (each dot represents a module).
To explore the relations further, the scatter plots were also graphed with normalised data
(Figure 12). The result showed even more clearly that the most fault-prone modules in pre-release
have nearly no post-release faults.

Figure 10: Scatterplots of cyclomatic complexity against number of pre- and post-release faults for release n+1 (each dot represents a module).
In order to determine whether or not large modules were less dense or complex than smaller
modules [Basili and Perricone, 1984] plotted the cyclomatic complexity versus module size.
Following the same pattern as in the earlier analysis, they failed to see any trends, and therefore
analysed the relation by grouping modules according to size. As illustrated above, this
can be very misleading. Instead we graphed scatter plots of the relation and calculated the
correlation (Figure 13).
The relation may not be linear. However, there is a good linear correlation between
cyclomatic complexity and LOC.
Earlier studies [Ohlsson and Alberg, 1996] have suggested that other design metrics could be
used in combination or on their own to explain fault-proneness. Therefore, we did the same
analysis using the SigFF measure instead of cyclomatic complexity.
Figure 12: Scatterplots of cyclomatic complexity against fault density (pre- and post-release) for release n+1 (each dot represents a module).

Figure 13: Complexity versus module size
The scatterplots using absolute numbers (Figure 14) or normalised data did not indicate any
new trends. In earlier work the product of cyclomatic complexity and SigFF was shown to
be a good predictor of fault-proneness. To evaluate the predictive ability of CC*SigFF, the Alberg
diagram was graphed (Figure 15). The combined metric appears to be better than both SigFF
and cyclomatic complexity on their own, and also better than the size metric.
Figure 15. Accumulated percentage of the absolute number of all faults when modules are ordered with respect to CC*SigFF, for release n+1.
The above results do not paint a very glowing picture of the usefulness of complexity metrics,
but it can be argued that 'being a good predictor of fault density' is not an appropriate
validation criterion for complexity metrics. This is discussed in Section 4. Nevertheless there
are some positive aspects. The combined metric CC*SigFF is again shown to be a reasonable
predictor of fault-prone modules. Also, measures like SigFF are, unlike LOC, available at a
very early stage in the software development. The fact that it correlates so closely with the
final LOC, and is a good predictor of total number of faults, is a major benefit.
3.4 Hypotheses relating to benchmarking
One of the major benefits of collecting and publicising the kind of data discussed in this
paper is to enable both intra- and inter-company comparisons.

Figure 14: Scatterplots of SigFF against number of pre- and post-release faults for release n+1 (each dot represents a module).

Despite the incredibly vast
volumes of software in operation throughout the world there is no consensus about what
constitutes, for example, a good, bad, or average fault density under certain fixed conditions
of measurement. It does not seem unreasonable to assume that such information might be
known, for example, for commercial C programs where faults are defined as operational
faults (in the sense of this paper) during the first 12 months of use by a typical user.
Although individual companies may know this kind of data for their own systems, almost
nothing has ever been published. The 'grey' literature (as referenced, for example, in
[Pfleeger and Hatton 1997]) seems to suggest some crude (but unsubstantiated) guidelines,
such as the following for fault density in the first 12 months of typical operational use:
. less than 1 fault per KLOC is very good (and typically only achieved by companies using
state-of-the-art development and testing methods)
. between 4 and 8 faults per KLOC is typical
. greater than 12 faults per KLOC is bad
When pre-release faults only are considered, there is some notion that 10-30 faults per KLOC
is typical for function, system and integration testing combined. For reasons discussed
already, high values of pre-release fault density are not indicative of poor quality (and may in
fact suggest the opposite). Therefore it would be churlish to talk in terms of 'good' and 'bad'
densities because, as we have already stressed, these figures may be explained by key
factors such as the effort spent on testing.
In this study, since we have data on successive releases, we can consider the following hypothesis:
Hypothesis 7: Fault densities at corresponding phases of testing and operation remain
roughly constant between subsequent major releases of a software system.
The results we present, being based only on one
system, represent just a single data point, but nevertheless we believe they may also be
valuable to other researchers.
In a similar vein we consider:
Hypothesis 8: software systems produced in similar environments have broadly similar fault
densities at similar testing and operational phases.
Really we are hoping to build up an idea of the range of fault densities that can reasonably
be expected. We compare our results with some other published data.
3.4.1 Fault densities at corresponding phases of testing and operation remain
roughly constant between subsequent major releases of a software system
FT ST SI OP
Rel n 3.49 2.60 0.07 0.20
Rel n+1 4.15 1.82 0.43 0.20
Table
5: Fault densities at the four phases of testing and operation
As table 5 shows, there is some support for the hypothesis that the fault-density remains
roughly the same between subsequent releases. The only exceptional phase is SI. As well as
providing some support for the hypothesis the result suggests that the development process is
stable and repeatable with respect to the fault-density. This has interesting implications for
the software process improvement movement, as epitomised by the Capability Maturity
Model (CMM).
A general assumption of CMM is that a stable and repeatable process is a necessary pre-requisite
for continuous process improvement. For an immature organisation (below level 3)
it is assumed to take many years to reach such a level. In CMM's terminology companies do
not have the kind of stable and repeatable process indicated in the above figures until they
are at level 3. Yet, like almost every software producing organisation in the world, the
organisation in this case study project is not at level 3. The results reflect a stability and
repeatability that, according to CMM, should not be present. As such we question the CMM's
underlying assumption about what constitutes an organisation that should have a stable and
repeatable process.
3.4.2 Software systems produced in similar environments have broadly similar
fault densities at similar testing and operational phases.
To test this hypothesis we compared the results of this case study with other published data.
For simplicity we restricted our analysis to the two distinct phases: 1) pre-release fault
density; and 2) post-release fault density. First, we can compare the results of the two
separate releases in the case study (Table 6).
Pre-release Post-release All
Rel n 6.09 0.27 6.36
Rel n+1 5.97 0.63 6.60
Table 6: Fault densities pre- and post-release for the case study system
The overall fault densities are similar to those reported for a range of systems in [Hatton
1995], while [Agresti and Evanco, 1992] reported similar ball-park figures in a study of Ada
programs, 3.0 to 5.5 faults/KLOC. The post-release fault densities seem to be roughly in line
with those reported in studies of best practice.
More interesting is the difference between the pre- and post-release fault densities. In both
versions the pre-release fault density is an order of magnitude higher than the post-release
density.
Of the few published studies that reveal the difference between pre- and post-release fault
density, [Pfleeger and Hatton, 1997] also report 10 times as many faults in pre-release
(although the overall fault density is lower). [Kitchenham et al 1986] reports a higher ratio of
pre-release to post-release. Their study was an investigation into the impact of inspections;
combining the inspected and non-inspected code together reveals a pre-release fault density
of approx per KLOC and a post-release fault density of approximately 0.3 per KLOC.
However, it is likely that the operational time here was not as long.
Thus, from the small amount of evidence available, we conclude that there appear to be 10-30 times as
many faults pre-release as post-release.
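
As a quick arithmetic check, the pre/post-release ratios implied by Table 6 can be computed directly from the densities given there (this is a simple worked calculation, not additional data):

# Pre/post-release fault density ratios from the Table 6 values.
densities = {"Rel n": (6.09, 0.27), "Rel n+1": (5.97, 0.63)}
for release, (pre, post) in densities.items():
    print("%s: pre/post ratio = %.1f" % (release, pre / post))
# Rel n:   about 22.6
# Rel n+1: about  9.5  -- roughly an order of magnitude or more, in line with
# the 10-30 range suggested above.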
4 Discussion and conclusions
Apart from the usual quality control angle, a very important perceived benefit of collecting
fault data at different testing phases is to be able to move toward statistical process control
for software development. For example, this is the basis for the software factory approach
proposed by Japanese companies such as Hitachi [Yasuda and Koga 1995] in which they
build fault profiles that enable them to claim accurate fault and failure prediction. Another
important motivation for collecting the various fault data is to enable us to evaluate the
effectiveness of different testing strategies. In this paper we have used an extensive example
of fault and failure data to test a range of popular software engineering hypotheses. The
results we have presented come from just two releases of a major system developed by a
single organisation. It may therefore be tempting for observers to dismiss their relevance for
the broader software engineering community. Such an attitude would be dangerous given the
rigour and extensiveness of the data-collection, and also the strength of some of the
observations.
The evidence we found in support of the two Pareto principles 1a) and 2a) is the least
surprising. It does seem to be inevitable that a small number of the modules in a system will
contain a large proportion of the pre-release faults and that a small proportion of the modules
will contain a large proportion of the post-release faults. However, the popularly believed
explanations for these two phenomena appear to be quite wrong:
. It is not the case that size explains in any significant way the number of faults. Many
people seem to believe (hypotheses 1b and 2b) that the reason why a small proportion of
modules account for most faults is simply because those fault-prone modules are
disproportionately large and therefore account for most of the system size. We have
shown this assumption to be false for this system.
. Nor is it the case that 'complexity' (or at least complexity as measured by 'complexity
metrics') explains the fault-prone behaviour (hypothesis 6). In fact complexity is not
significantly better at predicting fault and failure prone modules than simple size
measures.
. It is also not the case that the set of modules which are especially fault-prone pre-release
are going to be roughly the same set of modules that are especially fault-prone post-release
(hypothesis 4). Yet this view seems to be widely accepted, partly on the
assumption that certain modules are 'intrinsically' difficult and will be so throughout their
testing and operational life.
Our strong rejection of hypothesis 4 is a very important observation. Many believe that the
first place to look for modules likely to be fault-prone in operation is in those modules
which were fault prone during testing. In fact our results relating to hypothesis 4 suggest
exactly the opposite testing strategy as the most effective. If you want to find the modules
likely to be fault-prone in operation then you should ignore all the modules which were
fault-prone in testing! In reality, the danger here is in assuming that the given data provides
evidence of a causal relationship. The data we observed can be explained by the fact that the
modules in which few faults are discovered during testing may simply not have been tested
properly. Those modules which reveal large numbers of faults during testing may genuinely
be very well tested in the sense that all the faults really are 'tested out of them'. The key
missing explanatory data in this case is, of course, testing effort.
The results of hypothesis 4 also bring into question the entire rationale for the way software
complexity metrics are used and validated. The ultimate aim of complexity metrics is to
predict modules which are fault-prone post-release. Yet we have found that there is no
relationship between the modules which are fault-prone pre-release and the modules which
are fault-prone post-release. Most previous 'validation' studies of complexity metrics have
deemed a metric 'valid' if it correlates with the (pre-release) fault density. Our results
suggest that 'valid' metrics may therefore be inherently poor at predicting what they are
supposed to predict. The results of hypothesis 4 also highlight the dangers of using fault
density as a de-facto measure of user perceived software quality. If fault density is measured
in terms of pre-release faults (as is very common), then at the module level this measure tells
us worse than nothing about the quality of the module; a high value is more likely to be an
indicator of extensive testing than of poor quality. Our analysis of the value of 'complexity'
metrics is mixed. We confirmed some previous studies' results that popular complexity
metrics are closely correlated to size metrics like LOC. While LOC (and hence also the
complexity metrics) are reasonable predictors of absolute number of faults, they are very
poor predictors of fault density (which is what we are really after). However, some
complexity metrics like SigFF are, unlike LOC, available at a very early stage in the
software development process. The fact that it correlates so closely with the final LOC, is
therefore very useful. Moreover, we argued [Fenton and Pfleeger 1996], that being a good
predictor of fault-proneness may not be the most appropriate test of 'validity' of a
complexity metric. It is more reasonable to expect complexity metrics to be good predictors
of module attributes such as comprehensibility or maintainability.
We investigated the extent to which benchmarking type data could provide insights into
software quality. In testing hypotheses 7 and 8, we showed that the fault densities are
roughly constant between subsequent major releases, and our data indicate that there are 10-30
times as many pre-release faults as post-release faults. Even if readers are uninterested in
the software engineering hypotheses (1-6) they will surely value the publication of these
figures for future comparisons and benchmarking.
We believe that there are no 'software engineering laws' as such, because it is always possible
to construct a system in an environment which contradicts the law. For example, the studies
summarised in [Hatton 1997] suggest that larger modules have a lower fault density than
smaller ones. Apart from the fact that we found no clear evidence of this ourselves
(hypothesis 5) and also found weaknesses in the studies, it would be very dangerous to state
this as a law of software engineering. You only need to change the amount of testing you do
to 'buck' this law. If you do not test or use a module you will not observe faults or failures
associated with it. Again this is because the association between size and fault density is not
a causal one. It is for this kind of reason that we recommend more complete models that
enable us to augment the empirical observations with other explanatory factors, most
notably, testing effort and operational usage. In this sense our results justify the recent work
on building causal models of software quality using Bayesian Belief Networks, rather than
traditional statistical methods, which are patently inappropriate for defect prediction [Neil
and Fenton 1996].
In the case study system described in this paper, the data-collection activity is considered to
be a part of routine configuration management and quality assurance. We have used this data
to shed light on a number of issues that are central to the software engineering discipline. If
more companies shared this kind of data, the software engineering discipline could quickly
establish the empirical and scientific basis that it so sorely lacks.
Acknowledgements
We are indebted to Martin Neil for his valuable input to this work and to Pierre-Jacques
Courtois, Karama Kanoun, Jean-Claude Laprie, and Stuart Mitchell for their valuable
review comments. The work was supported, in part, by the EPSRC-funded project
IMPRESS, the ESPRIT-funded projects DEVA and SERENE, the Swedish National Board
for Industrial and Technical Development, and Ericsson Utvecklings AB.
--R
Project Software Defects From Analyzing Ada Designs
Estimating the fault content of software using the fix-on-fix model
Prediction and Control of ADA Software Defects
An integrated approach to criticality prediction.
A Rigorous and Practical Approach (2nd Edition)
tapping the wheels of software
Design metrics and aids to their automatic collection
Software structure metrics based on information flow.
Reliability of a Telecommunications System
Software Reliability Analysis of Three Successive Generations of a Switching System
Software Dependability of a Telephone Switching System
Experience in Software Reliability: From Data Collection to Quantitative Evaluation
Early quality prediction: a case study in telecommunications.
The effects of inspections on software quality and productivity
An evaluation of some design metrics.
Predicting software quality using Bayesian belief networks
Predicting error-prone software modules in telephone switches
Using formal description techniques - An introduction to ESTELLE
'Research on structured programming: an empiricist's evaluation'
Product development and quality in the software factory
An analysis of several software defect models
Software Complexity: Measures and Methods
--TR
--CTR
T. Wang , A. Hassan , A. Guedem , W. Abdelmoez , K. Goseva-Popstojanova , H. Ammar, Architectural level risk assessment tool based on UML specifications, Proceedings of the 25th International Conference on Software Engineering, May 03-10, 2003, Portland, Oregon
Norman Fenton , Paul Krause , Martin Neil, Software Measurement: Uncertainty and Causal Modeling, IEEE Software, v.19 n.4, p.116-122, July 2002
Mechelle Gittens , Hanan Lutfiyya , Michael Bauer , David Godwin , Yong Woo Kim , Pramod Gupta, An empirical evaluation of system and regression testing, Proceedings of the 2002 conference of the Centre for Advanced Studies on Collaborative research, p.3, September 30-October 03, 2002, Toronto, Ontario, Canada
Thomas J. Ostrand , Elaine J. Weyuker, The distribution of faults in a large industrial software system, ACM SIGSOFT Software Engineering Notes, v.27 n.4, July 2002
Parastoo Mohagheghi , Reidar Conradi , Ole M. Killi , Henrik Schwarz, An Empirical Study of Software Reuse vs. Defect-Density and Stability, Proceedings of the 26th International Conference on Software Engineering, p.282-292, May 23-28, 2004
Thomas J. Ostrand , Elaine J. Weyuker , Robert M. Bell, Predicting the Location and Number of Faults in Large Software Systems, IEEE Transactions on Software Engineering, v.31 n.4, p.340-355, April 2005
Thomas J. Ostrand , Elaine J. Weyuker , Robert M. Bell, Where the bugs are, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
Piotr Tomaszewski , Lars Lundberg , Hkan Grahn, Improving fault detection in modified code: a study from the telecommunication industry, Journal of Computer Science and Technology, v.22 n.3, p.397-409, May 2007
Anneliese Andrews , Catherine Stringfellow, Quantitative Analysis of Development Defects to Guide Testing: A Case Study, Software Quality Control, v.9 n.3, p.195-214, November 2001
Patrick Knab , Martin Pinzger , Abraham Bernstein, Predicting defect densities in source code files with decision tree learners, Proceedings of the 2006 international workshop on Mining software repositories, May 22-23, 2006, Shanghai, China
Gerard J. Holzmann, Economics of software verification, Proceedings of the 2001 ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering, p.80-89, June 2001, Snowbird, Utah, United States
Giovanni Denaro , Mauro Pezz, An empirical evaluation of fault-proneness models, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
Robert M. Bell , Thomas J. Ostrand , Elaine J. Weyuker, Looking for bugs in all the right places, Proceedings of the 2006 international symposium on Software testing and analysis, July 17-20, 2006, Portland, Maine, USA
Robyn R. Lutz , Ines Carmen Mikulski, Operational anomalies as a cause of safety-critical requirements evolution, Journal of Systems and Software, v.65 n.2, p.155-161, 15 February
Austen Rainer , Dorota Jagielska , Tracy Hall, Software engineering practice versus evidence-based software engineering research, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
Elaine J. Weyuker, Using operational distributions to judge testing progress, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Nachiappan Nagappan , Thomas Ball, Use of relative code churn measures to predict system defect density, Proceedings of the 27th international conference on Software engineering, May 15-21, 2005, St. Louis, MO, USA
Sajjad Mahmood , Richard Lai, A complexity measure for UML component-based system specification, SoftwarePractice & Experience, v.38 n.2, p.117-134, February 2008
A. Gne Koru , Jeff Tian, An empirical comparison and characterization of high defect and high complexity modules, Journal of Systems and Software, v.67 n.3, p.153-163, 15 September
Philip J. Boland , Harshinder Singh , Bojan Cukic, Comparing Partition and Random Testing via Majorization and Schur Functions, IEEE Transactions on Software Engineering, v.29 n.1, p.88-94, January
Thomas J. Ostrand , Elaine J. Weyuker , Robert M. Bell, Automating algorithms for the identification of fault-prone files, Proceedings of the 2007 international symposium on Software testing and analysis, July 09-12, 2007, London, United Kingdom
Adrian Schrter , Thomas Zimmermann , Andreas Zeller, Predicting component failures at design time, Proceedings of the 2006 ACM/IEEE international symposium on International symposium on empirical software engineering, September 21-22, 2006, Rio de Janeiro, Brazil
Elaine J. Weyuker , Thomas J. Ostrand , Robert M. Bell, Using Developer Information as a Factor for Fault Prediction, Proceedings of the Third International Workshop on Predictor Models in Software Engineering, p.8, May 20-26, 2007
M. P. Ware , F. G. Wilkie , M. Shapcott, The application of product measures in directing software maintenance activity, Journal of Software Maintenance and Evolution: Research and Practice, v.19 n.2, p.133-154, March 2007
Gwendolyn H. Walton , Robert M. Patton , Douglas J. Parsons, Usage testing of military simulation systems, Proceedings of the 33nd conference on Winter simulation, December 09-12, 2001, Arlington, Virginia
Andy Chou , Junfeng Yang , Benjamin Chelf , Seth Hallem , Dawson Engler, An empirical study of operating systems errors, ACM SIGOPS Operating Systems Review, v.35 n.5, Dec. 2001
Yong Woo Kim, Efficient use of code coverage in large-scale software development, Proceedings of the conference of the Centre for Advanced Studies on Collaborative research, p.145-155, October 06-09, 2003, Toronto, Ontario, Canada
Mohammad Alshayeb , Wei Li, An empirical study of system design instability metric and design evolution in an agile software process, Journal of Systems and Software, v.74 n.3, p.269-274, February 2005
Wei Li , Raed Shatnawi, An empirical study of the bad smells and class error probability in the post-release object-oriented system evolution, Journal of Systems and Software, v.80 n.7, p.1120-1128, July, 2007
A. Gunes Koru , Dongsong Zhang , Hongfang Liu, Modeling the Effect of Size on Defect Proneness for Open-Source Software, Proceedings of the Third International Workshop on Predictor Models in Software Engineering, p.10, May 20-26, 2007
Norman Fenton , William Marsh , Martin Neil , Patrick Cates , Simon Forey , Manesh Tailor, Making Resource Decisions for Software Projects, Proceedings of the 26th International Conference on Software Engineering, p.397-406, May 23-28, 2004
Piotr Tomaszewski , Jim Hkansson , Hkan Grahn , Lars Lundberg, Statistical models vs. expert estimation for fault prediction in modified code - an industrial case study, Journal of Systems and Software, v.80 n.8, p.1227-1238, August, 2007
Salah Bouktif , Houari Sahraoui , Giuliano Antoniol, Simulated annealing for improving software quality prediction, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
A. Gunes Koru , Jeff (Jianhui) Tian, Comparing High-Change Modules and Modules with the Highest Measurement Values in Two Large-Scale Open-Source Products, IEEE Transactions on Software Engineering, v.31 n.8, p.625-642, August 2005
Zhenmin Li , Lin Tan , Xuanhui Wang , Shan Lu , Yuanyuan Zhou , Chengxiang Zhai, Have things changed now?: an empirical study of bug characteristics in modern open source software, Proceedings of the 1st workshop on Architectural and system support for improving software dependability, p.25-33, October 21-21, 2006, San Jose, California
Raimund Moser , Barbara Russo , Giancarlo Succi, Empirical analysis on the correlation between GCC compiler warnings and revision numbers of source files in five industrial software projects, Empirical Software Engineering, v.12 n.3, p.295-310, June 2007
Matthew S. Harrison , Gwendolyn H. Walton, Identifying high maintenance legacy software, Journal of Software Maintenance: Research and Practice, v.14 n.6, p.429-446, November 2002
Michael Ellims , James Bridges , Darrel C. Ince, The Economics of Unit Testing, Empirical Software Engineering, v.11 n.1, p.5-31, March 2006
Khaled El Emam , Sada Benlarbi , Nishith Goel , Walcelio Melo , Hakim Lounis , Shesh N. Rai, The Optimal Class Size for Object-Oriented Software, IEEE Transactions on Software Engineering, v.28 n.5, p.494-509, May 2002
Lenin Singaravelu , Calton Pu , Hermann Hrtig , Christian Helmuth, Reducing TCB complexity for security-sensitive applications: three case studies, ACM SIGOPS Operating Systems Review, v.40 n.4, October 2006
Avner Engel , Mark Last, Modeling software testing costs and risks using fuzzy logic paradigm, Journal of Systems and Software, v.80 n.6, p.817-835, June, 2007
Norman E. Fenton , Martin Neil, Software metrics: roadmap, Proceedings of the Conference on The Future of Software Engineering, p.357-370, June 04-11, 2000, Limerick, Ireland
Mohammad Alshayeb , Wei Li, An Empirical Validation of Object-Oriented Metrics in Two Different Iterative Software Processes, IEEE Transactions on Software Engineering, v.29 n.11, p.1043-1049, November
Yair Wiseman, Advanced non-distributed operating systems course, ACM SIGCSE Bulletin, v.37 n.2, June 2005
Norman E. Fenton , Martin Neil, A Critique of Software Defect Prediction Models, IEEE Transactions on Software Engineering, v.25 n.5, p.675-689, September 1999
Kalhed El Emam , Sada Benlarbi , Nishith Goel , Shesh N. Rai, The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics, IEEE Transactions on Software Engineering, v.27 n.7, p.630-650, July 2001
Niels Veerman , Ernst-Jan Verhoeven, Cobol minefield detection, SoftwarePractice & Experience, v.36 n.14, p.1605-1642, November 2006
Khaled El-Emam, Object-oriented metrics: A review of theory and practice, Advances in software engineering, Springer-Verlag New York, Inc., New York, NY, 2002 | software faults and failures;empirical studies;software metrics |
631253 | Architecture-Based Performance Analysis Applied to a Telecommunication System. | AbstractSoftware architecture plays an important role in determining software quality characteristics, such as maintainability, reliability, reusability, and performance. Performance effects of architectural decisions can be evaluated at an early stage by constructing and analyzing quantitative performance models, which capture the interactions between the main components of the system as well as the performance attributes of the components themselves. This paper proposes a systematic approach to building Layered Queueing Network (LQN) performance models from a UML description of the high-level architecture of a system and more exactly from the architectural patterns used for the system. The performance model structure retains a clear relationship with the system architecture, which simplifies the task of converting performance analysis results into conclusions and recommendations related to the software architecture. In the second part of the paper, the proposed approach is applied to a telecommunication product for which an LQN model is built and analyzed. The analysis shows how the performance bottleneck is moving from component to component (hardware or software) under different loads and configurations and exposes some weaknesses in the original software architecture, which prevent the system from using the available processing power at full capacity due to excessive serialization. | Introduction
Performance characteristics (such as response time and throughput) are an integral part of the
quality attributes of a software system. There is a growing body of research that studies the role
of software architecture in determining different quality characteristics in general [12], [1], and
performance characteristics in special [15], [16]. Architectural decisions are made very early in
the software development process, therefore it would be helpful to be able to assess their effect
on software performance as soon as possible.
This paper contributes toward bridging the gap between software architecture and early
performance analysis. It proposes a systematic approach to building performance models from
the high-level software architecture of a system, which describes the main system components
and their interactions. The architectural descriptions on which the construction of a performance
model is based must capture certain issues relevant to performance, such as concurrency and
parallelism, contention for software resources (as, for example, for software servers or critical
sections), synchronization and serialization, etc.
Frequently used architectural solutions are identified in literature as architectural patterns (such
as pipeline and filters, client/server, client/broker/server, layers, master-slave, blackboard, etc.)
[3], [12]. A pattern introduces a higher-level of abstraction design artifact by describing a
specific type of collaboration between a set of prototypical components playing well-defined
roles, and helps our understanding of complex systems. The paper proposes a systematic
approach to building a performance model by transforming each architectural pattern employed
in a system into a performance sub-model. The advantage of using patterns is that they are
already identified and catalogued, so we can build a library of transformation rules for
converting patterns to performance models. If, however, not all components and interactions of
a high-level architecture are covered by previously identified architectural patterns, we can still
describe the remaining interactions as UML mechanisms [2] and proceed by defining ad-hoc
transformations into performance models.
The formalism used for building performance models is the Layered Queueing Network (LQN)
model [11, 17, 18], an extension of the well-known Queueing Network model. LQN was
developed especially for modelling concurrent and/or distributed software systems. Some LQN
components represent software processes, others hardware devices. One of the most interesting
performance characteristics of such systems is that a software process may play a dual role,
acting both as a client to some processes/devices, and as a server to others (see section 3 for a
more detailed description). Since a software server may have many clients, important queueing
delays may arise for it. The server may become a software bottleneck, thus limiting the potential
performance of the system. This can occur even if the devices used by the process are not fully
utilized. The analysis of an LQN model produces results such as response time, throughput,
queueing delays and utilization of different software and hardware components, and indicates
which components are the system bottleneck(s). By understanding the cause for performance
limitations, the developers will be able to concentrate on the system's ``trouble spots'' in order to
eliminate or mitigate the bottlenecks. The analysis of LQN models for various alternatives will
help in choosing the "right" changes, so that the system will eventually meet its performance
requirements.
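
The notion of a software bottleneck can be illustrated with a small back-of-the-envelope calculation. The Python sketch below uses invented numbers (it is not part of any LQN tool): a single-threaded software server saturates when its throughput multiplied by its service time approaches one, even though the processor it runs on still has spare capacity because much of the service time is spent blocked on other servers.

# Illustrative sketch (hypothetical numbers): utilisation of a single-threaded
# software server versus utilisation of its processor.
def utilisations(throughput, service_time_s, cpu_demand_s):
    software_util = throughput * service_time_s  # fraction of time the server is busy
    cpu_util = throughput * cpu_demand_s         # fraction of time the CPU is busy
    return software_util, cpu_util

if __name__ == "__main__":
    # 80 requests/s; each request holds the server for 12 ms, of which only
    # 4 ms is CPU work (the rest is spent blocked on a lower-layer server).
    sw, cpu = utilisations(throughput=80.0, service_time_s=0.012, cpu_demand_s=0.004)
    print("software server utilisation: %.0f%%" % (100 * sw))   # 96% -> near saturation
    print("processor utilisation:       %.0f%%" % (100 * cpu))  # 32% -> idle capacity left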
Software Performance Engineering (SPE) is a technique introduced in [14] that proposes to use
quantitative methods and performance models in order to assess the performance effects of
different design and implementation alternatives, from the earliest stages of software
development throughout the whole lifecycle. LQN modelling is very appropriate for such a use,
due to the fact that the model structure can be derived systematically from the high-level architecture
of the system, as proposed in this paper. Since the high-level architecture is decided early in the
development process and does not change frequently afterwards, the structure of the LQN model
is also quite stable. However, the LQN model parameters (such as execution times of the high-level
architectural components on behalf of different types of system requests) depend on low-level
design and implementation decisions. In the early development stages, the parameter values are
estimations based on previous experience with similar systems, on measurement of reusable
components, on known platform overheads (such as system call execution times) and on time
budgets allocated to different components. As the development progresses and more components
are implemented and measured, the model parameters become more accurate, and so do the
results. In [14] it is shown that early performance modelling has definite advantages, despite its
inaccurate results, especially when the model and its parameters are continuously refined
throughout the software lifecycle.
LQN was applied to a number of concrete industrial systems (such as database applications [5],
web servers [7], telecommunication systems, etc.) and was proven to be useful for providing
insights into performance limitations at software and hardware levels, for suggesting
performance improvements in different development stages, for system sizing and for capacity
planning. In this paper, LQN is applied to a real telecommunication system. Although the
structure of the LQN model was derived from the high-level architecture of the system, which
was chosen in the early development stage, we used model parameters obtained from prototype
measurements, which are more accurate than the estimated values available in pre-implementation
phases. The reason is that we became involved with the project when the system
was undergoing performance tuning, and so we used the best data available to analyze the high-level
architecture of the system (which was unchanged from the early design stages). We found
some weaknesses in the original architecture due to excessive serialization, and used the LQN
model to assess different architectural alternatives, in order to improve the performance by
removing or mitigating software bottlenecks.
The paper proceeds as follows: section 2 discusses architectural patterns and the UML notation
[2] used to represent them. Section 3 gives a brief description of the LQN model. Section 4
proposes transformations of the architectural patterns into LQN sub-models. Section 5 presents
the telecommunication system case study and its LQN model. Section 6 analyzes the LQN model
under various loads and configurations, shows how the bottleneck moves around the system and
proposes improvements to the system. Section 7 gives the conclusions of the work.
2. Architectural Patterns
According to [1], a software architecture represents a collection of computational components
that perform certain functions, together with a collection of connectors that describe the
interactions between components. A component type is described by a specification defining its
functions, and by a set of ports representing logical points of interaction between the component
and its environment. A connector type is defined as a set of roles explaining the expected
behaviour of the interacting parties, and a glue specification showing how the interactions are
coordinated.
A similar, even though less formal, view of a software architecture is described in the form of
architectural patterns [3, 13] which identify frequently used architectural solutions, such as
pipeline and filters, client/server, client/broker/server, master-slave, blackboard, etc. Each
architectural pattern describes two inter-related aspects: its structure (what are the components)
and behaviour (how they interact). In the case of high-level architectural patterns, the
components are usually concurrent entities that execute in different threads of control, compete
for resources, and interact in a prescribed manner which may require some kind of
synchronization. These are aspects that contribute to the performance characteristics of the
system, and therefore must be captured in the performance model.
The paper proposes to use high-level architectural patterns as a basis for translating software
architecture into performance models. A subset of such patterns, which are later used in the case
study, are described in the paper in the form of UML collaborations (not to be confused with
UML collaboration diagrams, a type of interaction diagrams). According to the authors of UML,
a collaboration is a notation for describing a mechanism or pattern, which represents "a society
of classes, interfaces, and other elements that work together to provide some cooperative
behaviour that is bigger than the sum of all of its parts" [2]. A collaboration has two aspects:
structural and behavioural. Fig. 1 and 2 illustrate these aspects for two alternatives of the pipeline
and filters pattern. Each figure contains on the left a UML class/object diagram describing the
pattern structure, and on the right a UML sequence diagram illustrating the pattern behaviour.
Figure 1. Structural and behavioural views of the collaboration for PIPELINE WITH MESSAGE.
Figure 2. Structural and behavioural views of the collaboration PIPELINE WITH BUFFER.
A
brief explanation of the UML notation used in the paper is given below (see [2] for more details).
The notation for a class or object is a rectangle indicating the class/object name (the name is
underlined for objects); the rectangle may contain optionally a section for the class/object
operations, and another one for its attributes. The multiplicity of the class/object is represented in
the upper right corner. A rectangle with thick lines represents an active class/object, which has
its own thread of control, whereas a rectangle with thin lines represents a passive one. An active
object may be implemented either as a process or as a thread (identified by the stereotype
<<process>> or <<thread>>, respectively). The constraint {sequential} attached to the operations of
a passive object, as in Fig. 2, indicates that the callers must coordinate outside the passive object
(for example, by the means of a semaphore) so that only one calls the passive object's operations
at any given time. The UML symbol for collaboration is an ellipse with a dashed line that may
have an "embedded" rectangle showing template classes. The collaboration symbol is connected
with the classes/objects with dashed lines, whose labels indicate the roles played by each
component. A line connecting two objects, named link, represents a relationship between the two
objects which interact by exchanging messages. Depending on the kind of interacting objects
(passive or active), UML messages may represent either operation calls, or actual messages sent
between different flows of control. Links between objects may be optionally annotated with
arrows showing the name and type of messages exchanged. For example, in Fig.1 an arrow with
a half arrowhead between the active objects filter1 and filter2 represents an asynchronous message,
whereas in Fig.2 the arrows with filled solid arrowheads labeled "write()" and "read()" represent
synchronous messages implemented as calls to the operations indicated by the label. When
relevant, the "object flow" carried by a message is represented by a little arrow with a circle (as
in Fig.2), while the message itself is an arrow without circle. A synchronous message implies a
reply, therefore it can carry objects in both directions. For example, in Fig.2, the object flow
carried by the message read() goes in the reverse direction than the message itself.
A sequence diagram, such as on the right side of Fig.1 and 2, shows the messages exchanged
between a set of objects in chronological order. The objects are arranged along the horizontal
axis, and the time grows along the vertical axis, from top to bottom. Each object has a lifeline
running parallel with the time axis. On the lifeline one can indicate the period of time during
which an object is performing an action as a tall thin rectangle called "focus of control", or the
state of the object as a rectangle with rounded corners called "state mark". The messages
exchanged between objects (which can be asynchronous or synchronous) are represented as
horizontal directed lines. An object can also send a message to itself, which means that one of its
operations invokes another operation of the same object.
Architectures using the pipeline and filters pattern divide the overall processing task into a
number of sequential steps which are implemented as filters, while the data between filters flows
through unidirectional pipes. We are interested here in active filters [3] that are running
concurrently. Each filter is implemented as a process or thread that loops through the following
steps: "pulls" the data (if any) from the preceding pipe, processes it, then "pushes" the results
down the pipeline. The way in which the push and pull operations are implemented may have
performance consequences. In Fig.1 the filters communicate through asynchronous messages. A
filter "pulls" an item by accepting the message sent by the previous filter, processes the item by
invoking its own operation proc_item(), passes the data on to the next filter by sending an
asynchronous message, after which it goes into a waiting state for the next item. In Fig.2, the filters
communicate through a shared buffer (one pushes by writing to the buffer, and the other pulls by
reading it). Whereas the filters are active objects with a multiplicity of one or higher, the buffer
itself is a passive object that offers two operations, read() and write(), which must be used one at a
time (as indicated by the constraint {sequential} ).
When defining the transformations from architectural patterns into LQN sub-models, we use
both the structural and the behavioural aspect of the respective collaborations. The structural part
is used directly, in the sense that each software component has counterpart(s) in the structure of
the LQN model (the mapping is not bijective). However, the behavioural part is used indirectly,
in the sense that it is matched by the behaviour of the LQN model, but is not represented
graphically.
3. LQN Model
LQN was developed as an extension of the well-known Queueing Network (QN) model, at first
independently in [17, 18] and [11], then as a joint effort [4]. The LQN toolset presented in [4]
includes both simulation and analytical solvers that merge the best previous approaches. The
main difference of LQN with respect to QN is that a server, to which customer requests are
arriving and queueing for service, may become a client to other servers from which it requires
nested services while serving its own clients. An LQN model is represented as an acyclic graph
whose nodes (named also tasks) are software entities and hardware devices, and whose arcs
denote service requests (see Fig.3). The software entities are drawn as parallelograms, and the
hardware devices as circles. The nodes with outgoing and no incoming arcs play the role of pure
clients. The intermediate nodes with incoming and
outgoing arcs play both the role of client and of
server, and usually represent software components.
The leaf nodes are pure servers, and are usually hardware
servers (such as processors, I/O
devices, communication network, etc.) A software
or hardware server node can be either a single-server
or a multi-server (composed of multiple
identical servers that work in parallel and share the
same request queue). A LQN task may offer more than one kind of service, each modelled by a
so-called entry, drawn as a parallelogram "slice". An entry has its own execution time and
demands for other services (given as model parameters). Although not explicitly illustrated in
the LQN notation, each server has an implicit message queue, where the incoming requests are
waiting their turn to be served. Servers with more than one entry still have a single input queue,
where requests for different entries wait together. The default scheduling policy of the queue is
FIFO, but other policies are also supported. Fig. 3 shows a simple example of an LQN model for
a three-tiered client/server system: at the top there are two client classes, each with a known
number of stochastically identical clients. Each client sends requests for a specific service offered by
a task named Application, which represents the business layer of the system. Each Application
entry requires services from two different entries of the Database task, which offers in total three
kinds of services. Every software task is running on a processor node, drawn as a circle; in the
example, all clients of the same class share a processor, whereas Application and Database share
another processor. Database also uses two disk devices, as shown in Fig.3. It is worth mentioning
that the word layered in the name of LQN does not imply a strict layering of the tasks (for
example, a task may call other tasks in the same layer, or skip over layers).
Figure 3. Simple LQN model.
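To make the example of Fig. 3 concrete, the sketch below shows one possible in-memory representation of its tasks, entries, processors and request arcs in Python. This is only an illustration of the model structure described above, not the input format of the LQN toolset [4]; the entry names, multiplicities, visit counts and device names are assumptions made for the example.

```python
# Hypothetical representation of the simple three-tiered LQN model of Fig. 3.
# All numeric values and entry names are illustrative assumptions.

tasks = {
    "Client1":     {"entries": ["c1"],                "processor": "Proc1", "copies": 40},
    "Client2":     {"entries": ["c2"],                "processor": "Proc2", "copies": 25},
    "Application": {"entries": ["app1", "app2"],      "processor": "Proc3", "copies": 1},
    "Database":    {"entries": ["db1", "db2", "db3"], "processor": "Proc3", "copies": 1},
}

devices = ["Proc1", "Proc2", "Proc3", "Disk1", "Disk2"]

# Request arcs between entries: (client entry, server entry) -> average number of visits.
requests = {
    ("c1", "app1"): 1.0,      # each Client1 request uses service app1
    ("c2", "app2"): 1.0,      # each Client2 request uses service app2
    ("app1", "db1"): 2.0,     # app1 needs two Database services
    ("app1", "db2"): 1.0,
    ("app2", "db2"): 1.0,     # app2 needs two (partly different) Database services
    ("app2", "db3"): 1.5,
    ("db1", "Disk1"): 1.0,    # Database also uses the two disk devices
    ("db3", "Disk2"): 2.0,
}
```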
There are three types of LQN messages, synchronous, asynchronous and forwarding, whose
effect is illustrated in Fig.4. A synchronous message represents a request for service sent by a
client to a server, where the client remains blocked until it receives a reply from the provider of
service (see Fig.4.a). If the server is busy when a request arrives, the request is queued and waits
its turn.
Figure 4. Different types of LQN messages: a) synchronous message; b) asynchronous message; c) forwarding message.
After accepting a message, the server starts to serve it by executing a sequence of phases
(one or more). At the end of phase 1, the server replies to the client, which is unblocked and
continues its work. The server continues with the following phases, if any, working in parallel
with the client, until the completion of the last phase. (In Fig.4.a, a case with three phases is
shown). After finishing the last phase, the server begins to serve a new request from the queue,
or becomes idle if the queue is empty. During any phase, the server may act as a client to other
servers, asking for and receiving so-called "included services". In the case of an asynchronous
message, the client does not block after sending the message, and the server does not reply back,
only executes its phases, as shown in Fig.4b. The third type of LQN message, named forwarding
message (represented as a dotted request arc) is associated with a synchronous request that is
served by a chain of servers, as illustrated in Fig. 4.c. The client sends a synchronous request to
Server1, which begins to process the request, then forwards it to Server2 at the end of phase 1.
Server1 proceeds normally with the remaining phases in parallel with Server2, then goes on to
another cycle. The client, however, remains blocked until the forwarded request is served by
Server2, which replies to the client at the end of its phase 1. A forwarding chain can contain any
number of servers, in which case the client waits until it receives a reply from the last server in
the chain.
The parameters of an LQN model are as follows (a small data-structure sketch is given after the list):
- customer (client) classes and their associated populations or arrival rates;
- for each software task entry: average execution time per phase;
- for each software task entry seen as a client to a device (i.e., for each request arc from
a task entry to a device): average service time at the device, and average number of
visits per phase of the requesting entry;
- for each software task entry seen as a client to another task entry (i.e., for each
request arc from a task entry to another task entry): average number of visits per
phase of the requesting entry;
- for each request arc: average message delay;
- for each software and hardware server: scheduling discipline.
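The sketch below groups the parameters listed above into a single structure; the field names and the numbers are hypothetical and do not follow the syntax of any particular LQN solver.

```python
# Hypothetical grouping of LQN model parameters; all names and values are illustrative.
model_parameters = {
    "workload": {
        "Client1": {"population": 100},       # closed customer class
        "Client2": {"arrival_rate": 20.0},    # open customer class (requests/sec)
    },
    "entry_phase_times": {                    # average execution time per phase (msec)
        "app1": [0.5, 1.2],
        "db1": [2.0],
    },
    "device_demands": {                       # entry seen as a client to a device
        ("db1", "Disk1"): {"service_time": 8.0, "visits_per_phase": [1.5]},
    },
    "entry_requests": {                       # entry seen as a client to another entry
        ("app1", "db1"): {"visits_per_phase": [2.0, 0.0]},
    },
    "arc_delays": {("c1", "app1"): 0.1},      # average message delay per request arc
    "scheduling": {"Application": "FIFO", "Proc3": "PS"},
}
```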
Typical results of an LQN model are response times, throughput, utilization of servers on behalf
of different types of requests, and queueing delays. The LQN results may be used to identify the
software and/or hardware bottlenecks that limit the system performance under different
workloads and configurations. Understanding the cause for performance limitations helps the
development team to come up with appropriate remedies.
4. Transformation from architecture to performance models
A software system contains many components involved in various architectural connection
instances (each described by a pattern/collaboration), and a component may play different roles
in connections of various types. The transformation of the architecture into a performance model
is done in a systematic way, pattern by pattern. As expected, the performance of the system
depends on the performance attributes of its components and on their interactions (as described
by patterns/collaborations). Performance attributes are not central to the software architecture
itself, and must be supplied by the user as additional information. They describe the demands for
hardware resources by the software components: allocation of processes to processors, execution
time demands for each software component on behalf of different types of system requests,
demands for other resources such as I/O devices, communication networks, etc. We will specify
more clearly what kind of performance attributes must be provided for each
pattern/collaboration. The transformations from the architecture to the performance modelling
domain are discussed next.
LQN model for Pipeline and Filters. Fig. 5 and 6 show the transformation into LQN
submodels of the two Pipeline and Filters collaborations described in Fig.1 and 2, respectively.
The translation takes into account, on one side, the structural and behavioural information
provided by the UML collaboration, and on the other side the allocation of software components
to processors, which will lead to different LQN submodels for the same pattern (see Fig.6).
Figure 5. Transformation of the PIPELINE WITH MESSAGE into an LQN submodel.
Figure 6. Transformation of the PIPELINE WITH BUFFER into an LQN submodel: a) all filters running on the same processor node; b) the filters running on different processor nodes.
The transformation rules are as follows:
a) Each active filter from Fig.5 and 6 becomes an LQN software server with a single entry,
whose service time includes the processing time of the filter. Each filter task will receive an
asynchronous message (described in Fig. 4.b) and will execute its phases in response to it. A
typical distribution of the work into phases is to receive the message in phase 1, to process it in
phase 2, and to send it to the next filter in phase 3.
b) The allocation of LQN tasks to processors mimics the real system. The way the filters are
allocated on the same or on different processor nodes does not make any difference for the
pipeline with message (which is why the processors are not represented in Fig.5), but it
affects the model for a pipeline with buffer, as explained below.
c) In the case of a pipeline with message, the aspects related to the pipeline connector
between the two filters are completely modelled by the LQN asynchronous message. The CPU
times for send and receive system calls are added to the execution times of the phases in which
the respective operations take place. If we want to model a network delay for the message, it can
be represented by a delay attached to the request arc.
d) In the case of a pipeline with buffer (Fig.6), an asynchronous LQN arc is necessary, but is not
sufficient to model all the aspects concerning the pipeline connector. Additional LQN elements
are required to take into account the serialization delay introduced by the constraint that buffer
operations must be mutually exclusive. A third task that plays the role of semaphore will enforce
this constraint, due to the fact that any task serializes the execution of its entries. The task has as
many entries as the number of critical sections executed by the filters accessing the buffer (two
in this case, "write" and "read"). Since the execution of each buffer operation takes place in the
thread of control of the filter initiating the operation, the allocation of filters to processors
matters. If both filters are running on the same processor node (which may have more than one
processor) as in Fig.6.a, then the read and write operations will be executed on the same processor
node. Thus, they can be modelled as entries of the semaphore task that is, obviously, co-allocated
with the filters. If, however, the filters are running on different processor nodes, as in Fig.6.b, the
mutual-exclusive operations read and write will be executed on different processor nodes, so they
cannot be modelled as entries of the same task. (In LQN, all entries of a task are executed on the
same processor node). The solution is shown in Fig.6.b: we keep the semaphore task for
enforcing the mutual exclusion, but its entries are only used to delegate the work to two new
tasks, each one responsible for a buffer operation. Each new task is allocated on the same
processor as the filter initiating the respective operation.
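As a sketch of rule d), the function below emits the extra LQN elements for the pipeline with buffer, depending on whether the two filters share a processor node (Fig. 6.a) or not (Fig. 6.b). Task and entry names are illustrative, and the node on which the semaphore task is placed in the second case is an assumption, since the pattern itself does not fix it.

```python
# Sketch of transformation rule d) for PIPELINE WITH BUFFER; names are illustrative.

def pipeline_with_buffer_submodel(upstream_node, downstream_node):
    tasks = {
        "filter1": {"entries": ["push"], "processor": upstream_node},
        "filter2": {"entries": ["pull"], "processor": downstream_node},
    }
    # The pipeline data flow itself is an asynchronous LQN arc between the filters.
    arcs = [("filter1", "filter2", "asynchronous")]
    if upstream_node == downstream_node:
        # Fig. 6.a: a co-allocated semaphore task with one entry per buffer operation.
        tasks["semaphore"] = {"entries": ["write", "read"], "processor": upstream_node}
        arcs += [("filter1", "write", "synchronous"), ("filter2", "read", "synchronous")]
    else:
        # Fig. 6.b: the semaphore only serializes; the buffer operations are delegated to
        # new tasks placed on the node of the filter that initiates each operation.
        tasks["semaphore"] = {"entries": ["sem_write", "sem_read"],
                              "processor": upstream_node}   # placement is a modelling choice
        tasks["buf_write"] = {"entries": ["write"], "processor": upstream_node}
        tasks["buf_read"] = {"entries": ["read"], "processor": downstream_node}
        arcs += [("filter1", "sem_write", "synchronous"), ("sem_write", "write", "synchronous"),
                 ("filter2", "sem_read", "synchronous"), ("sem_read", "read", "synchronous")]
    return tasks, arcs
```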
The LQN models for both pipeline and filters collaborations from Fig. 5 and 6 can be generated
with a forwarding message (as in Fig.4.c) instead of an asynchronous one (as in Fig.4.b), if the
source of requests for the first filter in a multi-filter architecture is closed instead of open. A
closed source is composed of a set of client tasks sending synchronous requests to the first filter,
and waiting for a reply from the last filter. Since a LQN task may send a forwarding message
exactly at the end of its phase1, all the work done by a filter task must take place in the first
phase.
The Client-Server pattern is very frequently used in today's distributed systems. Fig.7
illustrates a case where the client communicates directly with the server through synchronous
requests (described in Fig.4.a). A server may offer a wide range of services (represented in the
architectural view as the server's class methods) each one with its own performance attributes.
From a performance modelling point of view it is important not only to identify these services,
but also to find out who is invoking them and how frequently. The UML class diagram contains
a single association between a client and a server, no matter how many server methods the client
may invoke. Therefore, in addition to the line that represents the client-server association, we
also show the messages sent by the client to the server (a notation used mostly in collaboration
diagrams) to indicate all the services a client will invoke at one time or another.
There are other ways in which client/server connections may be realized, which are not
described in the paper as they do not apply to our case study. A well-known example is the use
of middleware technology, such as CORBA, to interconnect clients and servers running on
heterogeneous platforms across local or wide-area networks. CORBA connections introduce
very interesting performance implications and modelling issues [9].
LQN was originally created to model client/server systems, so the transformation from the client-server
pattern to LQN is quite straightforward. An LQN server may offer a range of services
(object methods in the architectural view), each with its own CPU time and number of visits to
other servers (these are performance attributes that must be provided). Each service is modelled
as an entry of the server task, as shown in Figure 7, and will contribute differently to the
response time, utilization, and throughput of the server.
Figure 7. Transformation of the client/server pattern into a LQN submodel.
A client may invoke more than one of
these services at different times. The performance attributes for the clients include their average
time demands, and the average number of calls for each entry of the server. As in the
pipeline connection case, the CPU time required to execute the system call for send/receive/reply
are added to the service times of the corresponding entries. The allocation of tasks to processors
is not shown in Fig. 7, because the transformation does not depend on it. Each LQN task is
allocated exactly as its architectural component counterpart.
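A possible sketch of this mapping is given below: every server method that some client calls becomes an entry of the server task, and every client-to-method message becomes a request arc annotated with the average number of calls. The names and numbers in the usage example are illustrative.

```python
# Sketch of the client/server transformation; method names and call counts are illustrative.

def client_server_submodel(server_name, client_calls):
    """client_calls: {client task: {invoked server method: average calls per client cycle}}"""
    entries = sorted({method for calls in client_calls.values() for method in calls})
    server_task = {"name": server_name, "entries": entries}
    request_arcs = [(client, method, visits)
                    for client, calls in client_calls.items()
                    for method, visits in calls.items()]
    return server_task, request_arcs

task, arcs = client_server_submodel(
    "server",
    {"client1": {"service1": 2.0}, "client2": {"service1": 1.0, "service2": 0.5}},
)
# task -> {'name': 'server', 'entries': ['service1', 'service2']}
```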
Critical section. This is a collaboration at a lower-level of abstraction than the previous
architectural patterns, but very frequently used. It describes the case where two or more active
objects share the same passive object. The constraint {sequential} attached to the methods of the
shared object indicates that the callers must coordinate outside the shared object (for example, by
the means of a semaphore) to ensure correct behaviour. Such synchronization introduces
performance delays, and must be represented in the LQN model. For simplicity, Fig.8
illustrates a case where each user invokes only one method of the shared object, but this can be
extended easily to allow each user to call a subset of methods.
The transformation of the critical section collaboration produces either the model given in Fig.
8.a or in Fig. 8.b, depending on the allocation of user processes to processor nodes (similar to the
pipeline with buffer case). The premise is that the shared object operations are mutually
exclusive, that an LQN task cannot change its processor node, and that all the entries of a task
are executed on the task's processor node. In the case where all users are running on the same
processor node, the shared object operations can be modelled as entries of a task that plays the
role of semaphore (see Fig.8.a), which is running on the same processor node as the users. The
generalization for allowing a user to call a subset of operations (entries) is straightforward: the
user is connected by a request arc to every entry in the subset.
If the users are running on different processor nodes as in Fig. 8.b, then the shared object
operations (i.e., critical sections) are executed by different threads of controls corresponding to
different users that are running on different processors. Therefore each operation is modelled as
an entry of a new task responsible for that operation that is running on its user's node. (If a user
is to call more shared operations, its new associated task will have an entry for every such
operation. This means that an operation called by more than one user will be represented by
more than one entry.) However, these new tasks must be prevented from running simultaneously,
so a semaphore task, with one entry for each user, is used to enforce the mutual exclusion. An
entry of the semaphore task delegates the work to the entries modelling the required operations.
The performance attributes to be provided for each user must specify the average execution times
spent outside and inside the critical section separately.
Figure 8. Transformation of the critical section collaboration to a LQN submodel: a) all users running on the same processor node; b) the users running on different processor nodes.
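The same reasoning can be written down as a small sketch, parallel to the pipeline-with-buffer rule: if all users share a processor node, the shared operations become entries of a co-allocated semaphore task (Fig. 8.a); otherwise each user gets a delegate task on its own node and the semaphore only serializes (Fig. 8.b). All names are illustrative, and the node hosting the semaphore task in the second case is an assumption.

```python
# Sketch of the critical section transformation; users maps a user task to its processor
# node and to the set of shared-object operations it calls. All names are illustrative.

def critical_section_submodel(users, semaphore_node="sem_node"):
    nodes = {u["processor"] for u in users.values()}
    tasks, arcs = {}, []
    if len(nodes) == 1:
        # Fig. 8.a: one semaphore task, one entry per shared operation, co-allocated.
        all_ops = sorted({op for u in users.values() for op in u["ops"]})
        tasks["semaphore"] = {"entries": all_ops, "processor": nodes.pop()}
        for name, u in users.items():
            arcs += [(name, op) for op in u["ops"]]
    else:
        # Fig. 8.b: one semaphore entry per user; the work is delegated to a per-user task
        # (f1 .. fN) running on that user's node, with an entry for each operation it calls.
        tasks["semaphore"] = {"entries": [f"sem_{n}" for n in users],
                              "processor": semaphore_node}  # placement is a modelling choice
        for name, u in users.items():
            tasks[f"f_{name}"] = {"entries": [f"{name}_{op}" for op in u["ops"]],
                                  "processor": u["processor"]}
            arcs.append((name, f"sem_{name}"))
            arcs += [(f"sem_{name}", f"{name}_{op}") for op in u["ops"]]
    return tasks, arcs
```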
Co-allocation collaboration. Fig. 9 shows the so-called co-allocation collaboration, where two
active objects are contained in a third active object, and are constrained to execute only one at a
time. The container object may be implemented as a process. This is an example of architectural
connection from our case-study system, which is not necessarily an architectural pattern, but is
quite frequently used. The most obvious solution, modelling the two
contained objects as entries of the same task, presents a disadvantage: it cannot represent the case
contained objects has its own request queue. (An LQN task has a unique message queue, where
requests for all entries are waiting together). One reason for which we may need separate queues
is to avoid cyclic graphs, which could not be accepted by the LQN solver used for this paper.
The solution presented in Fig.9 represents each contained active object as a separate "dummy"
task that delegates all the work to an entry of the container task, which serializes all its entries.
The dummy tasks are allocated on a dummy processor (not to interfere with the scheduling of the
"real" processor node).
Figure 9. LQN submodel of the COALLOCATION collaboration.
5. LQN Model of a Telecommunication System
We conducted performance modelling and analysis of an existing telecommunication system
which is responsible for developing, provisioning and maintaining various intelligent network
services, as well as for accepting and processing real-time requests for these services. According
to the Software Performance Engineering methodology [14], we first identified the critical
scenarios with the most stringent performance constraints (which correspond in this case to real-time
processing of service requests). Next we identified the software components involved in,
and the architectural patterns exercised by, the execution of the chosen scenarios (see Fig.10).
Figure 10. UML model of a telecommunication system, showing the Stack (StackIn, StackOut), IO (IOin, IOout), RequestHandler and DataBase components, the shared buffers, and the collaborations that connect them (PIPELINE WITH BUFFER, CLIENT SERVER, CRITICAL SECTION and COALLOCATION).
The real time scenario we have modelled starts from the moment a request arrives at the system
and ends after the service has been completely processed and a reply has been sent back. As shown in Fig.
10, a request is passed through several filters of a pipeline: from Stack process to IO process to
RequestHandler and all the way back. The main processing is done by the RequestHandler (as it
can be seen from Fig.12 and Table 1), which accesses a real-time database to fetch an execution
"script" for the desired service, then executes the steps of the script accordingly. The script may
vary in size and types of operations involved, and hence the workload varies largely from one
type of service to another (by one or two orders of magnitude). Based on experience and
intuition, the designers decided from the beginning to allow for multiple replications of the
RequestHandler process in order to speed up the system. Two shared objects, ShMem1 and
ShMem2, are used by the multiple RequestHandler replications. The system was intended to be
run either on a single-processor or on a multi-processor with shared memory. Processor
scheduling is such that any process can run on any free processor (i.e., the processors were not
dedicated to specific tasks). Therefore, the processor node was modelled as a multi-server. By
systematically applying the transformation rules described in the previous section to the
architectural patterns/collaborations used in the system, as shown in Fig. 10, the LQN model
shown in Fig.11 was obtained.
Figure 11. LQN base case model of the telecommunication system.
The next step was to determine the LQN model parameters (average service time for each entry,
and average number of visits for each request arc) and to validate the model. We have made use
of measurements using Quantify [19] and the Unix utility top. The measurements with Quantify
were obtained at very low arrival rates of around a couple of requests/second. Quantify is a
profiling tool which uses data from the compiler and run-time information to determine the user
and kernel execution times for test cases chosen by the user. Since we wanted to measure
average execution times for different software components on behalf of a system request (see
Table
1 in the Appendix), we have measured the execution of 2000 requests repeated in a loop,
then computed the average per request. Although we have not computed confidence intervals on
the measurements, repeated experiments were in close agreement. The top utility provided us
with utilization figures for very high loads of hundreds of requests/second, close to the actual
operating point.
Figure 12. Distribution of the total demand for CPU time per request over different software components (StackIn, StackOut, IOin, IOout, RequestHandler, DataBase), broken down into non-critical section time and critical sections for Buffer, ShMem1 and ShMem2 (execution times in msec).
These measurements were done on a prototype in the lab for two different
hardware configurations, with one and four processors. Again, repeated measurements were in
close agreement. We have used the execution times measured with Quantify (given in Table 1 in
the Appendix) to determine the model parameters, and the utilization results from top to validate
our model. The utilization values obtained by solving the model were within 5% of the measured
values. Unfortunately, a more rigorous validation was hindered by the lack of response time
measurements.
6. Performance Analysis of the Telecommunication System
Although the LQN toolset [4] offers both analytical and simulation solvers, the model results
used in this section were obtained by simulation. The reason is that one of the system features,
namely the scheduling policy by polling used for the RequestHandler multi-server, could not be
handled by the analytical solver. All the simulation results were obtained with a confidence
interval of ±1% at the 95% confidence level.
Figure 13. Maximum achievable throughput versus the replication factor of the RequestHandler (RH), for different hardware and software configurations (1, 4 and 6 processors) and a single class of service requests.
The first question of interest to developers was to find the "best" hardware and software
configuration that can achieve the desired throughput for a given mix of services. By
"configuration" we understand more specifically the number of processors in the multiprocessor
system and the number of RequestHandler software replications. We tried to answer this
question by exploring a range of configurations for a given service mix, determining for each the
highest achievable throughput (as in Fig. 13). Then the configurations with a maximum
throughput lower than the required values are discarded. The cheapest configuration that can
ensure satisfactory throughput and response time at an operating point below saturation will be
chosen. Solving the LQN model is a more efficient way to explore different configurations under
a wide range of workloads than measuring the real system for all these cases.
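The configuration exploration described above can be sketched as a simple loop over candidate configurations; solve_lqn() below stands in for a run of the LQN solver and is assumed to return the maximum achievable throughput of a configuration.

```python
# Illustrative configuration sweep; solve_lqn is a stand-in for the LQN solver.

def choose_configuration(candidates, required_throughput, solve_lqn):
    """candidates: list of dicts such as {"processors": 4, "rh_copies": 3, "cost": 10.0}"""
    feasible = []
    for cfg in candidates:
        max_tput = solve_lqn(processors=cfg["processors"], rh_copies=cfg["rh_copies"])
        if max_tput >= required_throughput:
            feasible.append((cfg["cost"], cfg, max_tput))
    if not feasible:
        return None                      # no candidate meets the requirement
    _, best_cfg, best_tput = min(feasible, key=lambda item: item[0])
    return best_cfg, best_tput
```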
Although we have modelled the system for two classes of services, we chose to report here
only results for a single class because they illustrate more clearly how the bottleneck moves
around from hardware to software for different configurations. The model was analyzed for three
hardware configurations: with one, four and six processors, respectively. We chose one and four
processors since the actual system had been run for such configurations, and six processors to see
how the software architecture scales up.
When running the system on a single processor (see Fig. 14), the replication of the
RequestHandler does not increase the maximum achievable throughput. This is due to the fact
that the processor is the system bottleneck. As known from [8], the replication of software
processes brings performance rewards only if there is unused processing capacity (which is not
the case here).
A software server is said to be "utilized" both when it is doing effective work and when it is waiting to be
served by lower level servers (including the queueing for the included services). Fig. 17 and 18
show different contributions to the
utilization of two tasks, IOout and
RH, when the system is saturated (i.e.,
works at the highest achievable
throughput given in Fig. 13) for
different numbers of RH replications.
It is easy to see that for more than five
RH copies, one task (i.e., IOout) has a
very high utilization even though it
does little useful work, whereas at the
same time another task (i.e., a RH
copy) has a lower utilization and does
more useful work.
Interestingly enough, in the case of the 4-processor configuration we notice that with more
processing capacity in the system, the processors do not reach the maximum utilization level, as
shown in Fig.15. Instead, two software tasks IOout and IOin (which are actually responsible for
little useful work on behalf of a system request) reach critical levels of utilization due to
serialization constraints in the software architecture.
Figure 14. Task utilizations for the 1-processor base case (utilization versus number of RequestHandler replications).
Figure 15. Task utilizations for the 4-processor base case (utilization versus number of RequestHandler replications).
Figure 16. Task utilizations for the 6-processor base case (utilization versus number of RequestHandler replications).
There are two reasons
for serialization: i) IOin and IOout
are executed by a single thread of
control (which in LQN means
waiting for task IOexec), and ii) they
contend for the same buffer. Thus,
with increasing processing power,
the system bottleneck is moving
from hardware to software. This
trend is more visible in the case of the
6-processor configuration, where the
processor utilization reaches only
86.6%, as shown in Fig.16. The
bottleneck has definitely shifted
from hardware to software, where
the limitations in performance are due to constraints in the software architecture.
We tried to eliminate the serialization constraints in two steps: first by making each filter a
process on its own (i.e., by removing StackExec and IOexec tasks in the LQN model), then by
splitting the pipeline buffer sitting between IO process and RequestHandler into two buffers. The
LQN model obtained after the 2-step modifications is shown in Fig. 19. The results of the "half-
way" modified system (after the first step only) are given in Fig. 20, and show no major
performance improvement (the processor is still used below capacity). By examining again the
utilization components of IOin and IOout (which are still the bottleneck), we found that they
wait most of the time to gain access to the Buffer (IOout is waiting about 90% of the time and
IOin 80%).
Figure 17. Contributions to IOout utilization when the system is saturated, as a function of the RH replication level.
Figure 18. Contributions to the utilization of a RH copy when the system is saturated, as a function of the RH replication level.
After applying both modification steps, though, the software bottleneck due to
excessive serialization in the pipeline was removed, and the processor utilization went up again,
as shown in Fig. 21 for the 4-processor and in Fig. 22 for the 6-processor configuration.
As expected, the maximum achievable throughput increased as well. The throughput increase
was rather small in the case of 4 processors (only 2.7%), and larger in the case of 6 processors
(of 10.3%), where there was more unused processing power. We also realized that in the case of
6 processors a new software bottleneck has emerged, namely the Database process (which is now
100% utilized). The new bottleneck, caused by a low-level server, has propagated upwards,
saturating all the software processes that are using it (all RequestHandler replications).
The final conclusion of the performance analysis is that different configurations will have
different bottlenecks, and by solving the bottleneck in one configuration, we shift the problem
somewhere else. What performance modelling has to offer is the ability to explore a range of
design alternatives, configurations and workload mixes, and to detect the causes of performance
limitations just by analyzing the model, before proceeding to change the real system.
Figure 19. LQN model of the modified system.
Figure 20. Task utilizations for the 4-processor half-way modified system.
Figure 21. Task utilizations for the 4-processor fully modified system.
Figure 22. Task utilizations for the 6-processor fully modified system.
7. Conclusions
This paper contributes toward bridging the gap between software architecture and performance
analysis. It proposes a systematic approach to building performance models from the high-level
software architecture of a system, by transforming each architectural pattern employed in the
system into a performance sub-model. There is on-going work to formalize the kind of
transformations presented in the paper from the architecture to the performance domain by using
formal graph transformations.
The paper illustrates the proposed approach to building LQN models by applying it to an existing
telecommunication system. The performance analysis exposes weaknesses in the original
architecture due to excessive serialization, which show up when more processing power is added
to the system. Surprisingly, software components that do relatively little work on behalf of a
system request can become the bottleneck in certain cases, whereas components that do most of
the work do not. After removing the serialization constraints, a new software bottleneck emerges,
which leads to the conclusion that the software architecture as it is does not scale up well. This
case study illustrates the usefulness of applying performance modelling and analysis to software
architectures.
--R
"A Formal Basis for architectural Connection"
The Unified Modeling Language User Guide
A System of Patterns
"A toolset for Performance Engineering and Software Design of Client-Server Systems"
"Performance Analysis of Distributed Server Systems"
"Performance of Multi-Level Client-Server Systems with Parallel Service Operations"
"Measuremnt Tool and Modelling Techniques for Evaluating Web Server Performance"
"Software bottlenecking in client-server systems and rendezvous networks"
"Deriving Software Performance Models from architectural Patterns by Graph Transformations"
"A Multi-Layer Client-Server Queueing Network Model with Synchronous and Asynchronous Messages"
"The Method of Layers"
Perspectives on an Emerging Discipline
"Some Patterns for Software Architecture"
Performance Engineering of Software Systems
"Architecture-Based Performance Analysis"
"Performance Evaluation of
"Throughput Calculation for Basic Stochastic Rendezvous Networks"
"The Stochastic Rendezvous Network Model for Performance of Synchronous Client-Server-like Distributed Software"
--TR
--CTR
performance model for a BPI middleware, Proceedings of the 4th ACM conference on Electronic commerce, June 09-12, 2003, San Diego, CA, USA
Software Performance Engineering of a Web service-based Clinical Decision Support infrastructure, ACM SIGSOFT Software Engineering Notes, v.29 n.1, January 2004
Gordon P. Gu , Dorina C. Petriu, XSLT transformation from UML models to LQN performance models, Proceedings of the 3rd international workshop on Software and performance, July 24-26, 2002, Rome, Italy
Edward A. Billard, Operating system scenarios as Use Case Maps, ACM SIGSOFT Software Engineering Notes, v.29 n.1, January 2004
Christoph Lindemann , Axel Thmmler , Alexander Klemm , Marco Lohmann , Oliver P. Waldhorst, Performance analysis of time-enhanced UML diagrams based on stochastic processes, Proceedings of the 3rd international workshop on Software and performance, July 24-26, 2002, Rome, Italy
Giovanni Denaro , Andrea Polini , Wolfgang Emmerich, Early performance testing of distributed software applications, ACM SIGSOFT Software Engineering Notes, v.29 n.1, January 2004
Vincenzo Grassi , Raffaela Mirandola, Derivation of Markov Models for Effectiveness Analysis of Adaptable Software Architectures for Mobile Computing, IEEE Transactions on Mobile Computing, v.2 n.2, p.114-131, January
Vincenzo Grassi , Raffaela Mirandola, PRIMAmob-UML: a methodology for performance analysis of mobile software architectures, Proceedings of the 3rd international workshop on Software and performance, July 24-26, 2002, Rome, Italy
Vittorio Cortellessa , Katerina Goseva-Popstojanova , Kalaivani Appukkutty , Ajith R. Guedem , Ahmed Hassan , Rania Elnaggar , Walid Abdelmoez , Hany H. Ammar, Model-Based Performance Risk Analysis, IEEE Transactions on Software Engineering, v.31 n.1, p.3-20, January 2005
Johan Moe , David A. Carr, Using execution trace data to improve distribute systems, SoftwarePractice & Experience, v.32 n.9, p.889-906, July 2002
Vibhu Saujanya Sharma , Kishor S. Trivedi, Quantifying software performance, reliability and security: An architecture-based approach, Journal of Systems and Software, v.80 n.4, p.493-509, April, 2007
Dorin Bogdan Petriu , Murray Woodside, Software performance models from system scenarios, Performance Evaluation, v.61 n.1, p.65-89, June 2005
Vittorio Cortellessa , Raffaela Mirandola, PRIMA-UML: a performance validation incremental methodology on early UML diagrams, Science of Computer Programming, v.44 n.1, p.101-129, July 2002 | layered queueing networks;Unified Modeling Language UML;software architecture;software performance analysis;architectural patterns |
633045 | Scalable application layer multicast. | We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties.We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25%), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic.Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1% as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100. | INTRODUCTION
Multicasting is an efficient mechanism for packet delivery in one-to-many
data transfer applications. It eliminates redundant packet replication
in the network. It also decouples the size of the receiver set
from the amount of state kept at any single node and therefore is
a useful primitive to scale multi-party applications. However, deployment
of network-layer multicast [11] has not been widely adopted
by most commercial ISPs, and thus large parts of the Internet are
still incapable of native multicast more than a decade after the protocols
were developed. Application-Layer Multicast protocols [10,
12, 7, 14, 15, 24, 18] do not change the network infrastructure, instead
they implement multicast forwarding functionality exclusively
at end-hosts. Such application-layer multicast protocols are increasingly
being used to implement efficient commercial content-
distribution networks.
In this paper, we present a new application-layer multicast protocol
which has been developed in the context of the NICE project at
the University of Maryland 1 . NICE is a recursive acronym which
stands for NICE is the Internet Cooperative Environment. In this
paper, we refer to the NICE application-layer multicast protocol as
simply the NICE protocol. This protocol is designed to support applications
with large receiver sets. Such applications include news
and sports ticker services such as Infogate (http://www.infogate.com)
and ESPN Bottomline (http://www.espn.com); real-time stock quotes
and updates, e.g. the Yahoo! Market tracker, and popular Internet
Radio sites. All of these applications are characterized by very
large (potentially tens of thousands) receiver sets and relatively low
bandwidth soft real-time data streams that can withstand some loss.
We refer to this class of large receiver set, low bandwidth real-time
data applications as data stream applications. Data stream applications
present an unique challenge for application-layer multicast
protocols: the large receiver sets usually increase the control overhead
while the relatively low-bandwidth data makes amortizing this
control overhead difficult. NICE can be used to implement very
large data stream applications since it has a provably small (con-
stant) control overhead and produces low latency distribution trees.
It is possible to implement high-bandwidth applications using NICE
as well; however, in this paper, we concentrate exclusively on low
bandwidth data streams with large receiver sets.
1.1 Application-Layer Multicast
The basic idea of application-layer multicast is shown in Figure 1.
Unlike native multicast where data packets are replicated at routers
inside the network, in application-layer multicast data packets are
replicated at end hosts. Logically, the end-hosts form an overlay
network, and the goal of application-layer multicast is to construct
and maintain an efficient overlay for data transmission.
Figure 1: Network-layer and application layer multicast. Square nodes are routers, and circular nodes are end-hosts. The dotted lines represent peers on the overlay.
Since application-layer multicast protocols must send the identical packets over
the same link, they are less efficient than native multicast. Two intuitive
measures of "goodness"for application layer multicast over-
lays, namely stress and stretch, were defined in [10]). The stress
metric is defined per-link and counts the number of identical packets
sent by a protocol over each underlying link in the network. The
stretch metric is defined per-member and is the ratio of path-length
from the source to the member along the overlay to the length of
the direct unicast path. Consider an application-layer multicast protocol
in which the data source unicasts the data to each receiver.
Clearly, this "multi-unicast" protocol minimizes stretch, but doesso
at a cost of O(N) stress at links near the source (N is the number
of group members). It also requires O(N)control overhead at some
single point. However, this protocol is robust in the sense that any
number of group member failures do not affect the other members
in the group.
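As a small illustration of the two metrics, the sketch below computes per-link stress and per-member stretch for a given overlay; the overlay edges and the underlying unicast routes and distances are assumed to be known inputs.

```python
# Illustrative computation of the stress and stretch metrics defined above.
from collections import Counter

def link_stress(overlay_edges, unicast_route):
    """unicast_route[(a, b)] is the list of physical links on the unicast path a -> b."""
    stress = Counter()
    for a, b in overlay_edges:
        for link in unicast_route[(a, b)]:
            stress[link] += 1   # one copy of each packet crosses this link per overlay edge
    return stress

def stretch(overlay_distance, unicast_distance):
    """Ratio of the source-to-member path length along the overlay to the direct unicast path."""
    return overlay_distance / unicast_distance
```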
In general, application-layer multicast protocols can be evaluated
along three dimensions:
- Quality of the data delivery path: The quality of the tree is
measured using metrics such as stress, stretch, and node degrees.
- Robustness of the overlay: Since end-hosts are potentially less
stable than routers, it is important for application-layer multicast
protocols to mitigate the effect of receiver failures. The
robustness of application-layer multicast protocols is measured
by quantifying the extent of the disruption in data delivery
when different members fail, and the time it takes for the protocol
to restore delivery to the other members. We present the
first comparison of this aspect of application-layer multicast
protocols.
- Control overhead: For efficient use of network resources, the
control overhead at the members should be low. This is an
important cost metric to study the scalability of the scheme
to large member groups.
1.2 NICE Trees
Our goals for NICE were to develop an efficient, scalable, and
distributed tree-building protocol which did not require any underlying
topology information. Specifically, the NICE protocol reduces
the worst-case state and control overhead at any member to O(log N),
maintains a constant degree bound for the group members and approaches
the O(log N) stretch bound possible with a topology-aware
centralized algorithm. Additionally, we also show that an average
member maintains state for a constant number of other members,
and incurs constant control overhead for topology creation and main-
tenance.
In the NICE application-layer multicast scheme, we create a hier-
archically-connected control topology. The data delivery path is implicitly
defined in the way the hierarchy is structured and no additional
route computations are required.
Along with the analysis of the various bounds, we also present a
simulation-based performance evaluation of NICE. In our simula-
tions, we compare NICE to the Narada application-layer multicast
protocol [10]. Narada was first proposed as an efficient application-layer
multicast protocol for small group sizes. Extensions to it have
subsequently been proposed [9] to tailor its applicability to high-bandwidth
media-streaming applications for these groups, and have
been studied using both simulations and implementation. Lastly, we
present results from a wide-area implementation in which we quantify
the NICE run-time overheads and convergence properties for
various group sizes.
1.3 Roadmap
The rest of the paper is structured as follows: In Section 2, we
describe our general approach, explain how different delivery trees
are built over NICE and present theoretical bounds about the NICE
protocol. In Section 3, we present the operational details of the pro-
tocol. We present our performance evaluation methodology in Section
4, and present detailed analysis of the NICE protocol through
simulations in Section 5 and a wide-area implementation in Section
6. We elaborate on related work in Section 7, and conclude in
Section 8.
2. SOLUTION OVERVIEW
The NICE protocol arranges the set of end hosts into a hierarchy;
the basic operation of the protocol is to create and maintain the hi-
erarchy. The hierarchy implicitly defines the multicast overlay data
paths, as described later in this section. The member hierarchy is
crucial for scalability, since most members are in the bottom of the
hierarchy and only maintain state about a constant number of other
members. The members at the very top of the hierarchy maintain
(soft) state about O(log N) other members. Logically, each member
keeps detailed state about other members that are near in the
hierarchy, and only has limited knowledge about other members in
the group. The hierarchical structure is also important for localizing
the effect of member failures.
The NICE hierarchy described in this paper is similar to the member
hierarchy used in [3] for scalable multicast group re-keying. How-
ever, the hierarchy in [3] is layered over a multicast-capable network
and is constructed using network multicast services (e.g. scoped
expanding ring searches). We build the necessary hierarchy on a
unicast infrastructure to provide a multicast-capable network.
In this paper, we use end-to-end latency as the distance metric
between hosts. While constructing the NICE hierarchy, members
that are "close" with respect to the distance metric are mapped to
the same part of the hierarchy: this allows us to produce trees with
low stretch.
In the rest of this section, we describe how the NICE hierarchy
is defined, what invariants it must maintain, and describe how it is
used to establish scalable control and data paths.
2.1 Hierarchical Arrangement of Members
The NICE hierarchy is created by assigning members to different
levels (or layers) as illustrated in Figure 2. Layers are numbered
sequentially with the lowest layer of the hierarchy being layer zero
(denoted by L0 ). Hosts in each layer are partitioned into a set of
clusters. Each cluster is of size between k and 3k-1, where k is a
constant, and consists of a set of hosts that are close to each other.
We explain our choice of the cluster size bounds later in this paper
(Section 3.2.1). Further, each cluster has a cluster leader.
Figure 2: Hierarchical arrangement of hosts in NICE. The layers are logical entities overlaid on the same underlying physical network. (All hosts are joined to layer 0; cluster-leaders of layer 0 form layer 1.)
Figure 3: Control and data delivery paths for a two-layer hierarchy. All A i hosts are members of only L0 clusters. All B i hosts are members of both layers L0 and L1 . The only C host is the leader of the L1 cluster comprising of itself and all the B hosts.
The protocol distributedly chooses the (graph-theoretic) center of the cluster
to be its leader, i.e. the cluster leader has the minimum maximum
distance to all other hosts in the cluster. This choice of the
cluster leader is important in guaranteeing that a new joining member
is quickly able to find its appropriate position in the hierarchy
using a very small number of queries to other members.
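To make the leader-selection rule concrete, the following is a minimal sketch (in Python; the function and variable names are ours, not taken from the protocol specification) of picking the graph-theoretic center of a cluster from pairwise distance estimates, assuming a symmetric distance table.

# Minimal sketch (illustrative, not the paper's implementation): the cluster
# center is the host whose maximum distance to any other member is smallest.
def cluster_center(members, dist):
    """members: list of host ids; dist: dict of (a, b) -> latency estimate."""
    def eccentricity(h):
        # largest distance from h to any other cluster member
        return max(dist[(h, other)] for other in members if other != h)
    return min(members, key=eccentricity)

# Example: B has the smallest maximum distance, so B is chosen as leader.
dist = {("A", "B"): 5, ("B", "A"): 5, ("A", "C"): 12,
        ("C", "A"): 12, ("B", "C"): 6, ("C", "B"): 6}
print(cluster_center(["A", "B", "C"], dist))  # -> B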
Hosts are mapped to layers using the following scheme: All hosts
are part of the lowest layer, L0 . The clustering protocol at L0 partitions
these hosts into a set of clusters. The cluster leaders of all the
clusters in layer L i join layer L i+1 . This is shown with an example
in Figure 2, using k = 3. The layer L0 clusters are [ABCD],
[EFGH] and [JKLM] 2 . In this example, we assume that C, F and
M are the centers of their respective L0 clusters,
and are chosen to be the leaders. They form layer L1 and are clustered
to create the single cluster, [CFM], in layer L1 . F is the center
of this cluster, and hence its leader. Therefore F belongs to layer L2
as well.
The NICE clusters and layers are created using a distributed algorithm
described in the next section. The following properties hold
for the distribution of hosts in the different layers:
- A host belongs to only a single cluster at any layer.
- If a host is present in some cluster in layer L i , it must occur
in one cluster in each of the layers L0 , . . . , L i-1 . In fact, it
is the cluster-leader in each of these lower layers.
- If a host is not present in layer L i , it cannot be present in any
layer L j with j > i.
- Each cluster has its size bounded between k and 3k-1. The
leader is the graph-theoretic center of the cluster.
- There are at most log_k N layers, and the highest layer has
only a single member.
We denote a cluster comprising of hosts X, Y, Z, . . . by [XYZ . . .].
We also define the term super-cluster for any host, X. Assume
that host X belongs to layers L0 , . . . , L i-1 and no other layer,
and let [..XYZ..] be the cluster it belongs to in its highest layer (i.e.
layer L i-1 ) and let Y be its leader in that cluster. Then, the super-cluster
of X is defined as the cluster, in the next higher layer (i.e. L i ), to
which its leader Y belongs. It follows that there is only one super-cluster
defined for every host (except the host that belongs to the
top-most layer, which does not have a super-cluster), and the super-cluster
is in the layer immediately above the highest layer that X
belongs to. For example, in Figure 2, cluster [CFM] in layer L1 is the
super-cluster for hosts B, A, and D. In NICE each host maintains
state about all the clusters it belongs to (one in each layer to which
it belongs) and about its super-cluster.
2.2 Control and Data Paths
The host hierarchy can be used to define different overlay structures
for control messages and data delivery paths. The neighbors
on the control topology exchange periodic soft state refreshes and
do not generate high volumes of traffic. Clearly, it is useful to have
a structure with higher connectivity for the control messages, since
this will cause the protocol to converge quicker.
In Figure 3, we illustrate the choices of control and data paths using
clusters of size 4. The edges in the figure indicate the peerings
between group members on the overlay topology. Each set of four
hosts arranged in a 4-clique in Panel 0 are the clusters in layer L0 .
Hosts B0 , B1 , B2 and C0 are the cluster leaders of these four L0
clusters and form the single cluster in layer L1 . Host C0 is the leader
of this cluster in layer L1 . In the rest of the paper, we use Cl j (X)
to denote the cluster in layer L j to which member X belongs. It is
defined if and only if X belongs to layer L j .
The control topology for the NICE protocol is illustrated in Figure
3, Panel 0. Consider a member, X , that belongs only to layers
L0 , . . . , L i . Its peers on the control topology are the other members
of the clusters to which X belongs in each of these layers, i.e. members
of clusters Cl 0 (X), . . . , Cl i (X). Using the example (Figure 3,
Panel 0), member A0 belongs to only layer L0 , and therefore, its
control path peers are the other members in its L0 cluster, i.e. A1 , A2
and B0 . In contrast, member B0 belongs to layers L0 and L1 and
therefore, its control path peers are all the other members of its L0
cluster (i.e. A0 , A1 and A2 ) and L1 cluster (i.e. B1 , B2 and C0 ).
In this control topology, each member of a cluster, therefore, exchanges
soft state refreshes with all the remaining members of the
cluster. This allows all cluster members to quickly identify changes
in the cluster membership, and in turn, enables faster restoration of
a set of desirable invariants (described in Section 2.4), which might
be violated by these changes.
Figure 4: Data forwarding operation at a host, h, that itself received
the data from host p.
The delivery path for multicast data distribution needs to be loop-free;
otherwise, duplicate packet detection and suppression mechanisms
need to be implemented. Therefore, in the NICE protocol we
choose the data delivery path to be a tree. More specifically, given
a data source, the data delivery path is a source-specific tree, and
is implicitly defined from the control topology. Each member executes
an instance of the Procedure MulticastDataForward given
in Figure 4, to decide the set of members to which it needs to forward
the data. Panels 1, 2 and 3 of Figure 3 illustrate the consequent
source-specific trees when the sources are at members A0 , A7 and
C0 respectively. We call this the basic data path.
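For concreteness, here is a small sketch of the forwarding rule of Figure 4 as we read it (the function name, argument names and container types are ours, not from the protocol specification): a host forwards a packet to the other members of each cluster it belongs to, skipping any cluster that already contains the peer it received the packet from.

# Sketch of the data forwarding rule (our reading of Figure 4; names are ours).
def multicast_data_forward(h, p, clusters_of_h):
    """h: this host; p: peer the packet arrived from (None at the source);
    clusters_of_h: one set of member ids per layer that h belongs to."""
    targets = set()
    for cluster in clusters_of_h:
        if p is not None and p in cluster:
            continue  # members of the sender's cluster are covered by the sender
        targets.update(m for m in cluster if m != h)
    return targets

# Example from Figure 3: B0 is in [A0,A1,A2,B0] (L0) and [B0,B1,B2,C0] (L1).
# Receiving data from A0, it forwards only to its L1 peers B1, B2 and C0.
print(multicast_data_forward("B0", "A0",
      [{"A0", "A1", "A2", "B0"}, {"B0", "B1", "B2", "C0"}]))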
To summarize, in each cluster of each layer, the control topology
is a clique, and the data topology is a star. It is possible to choose
other structures, e.g. in each cluster, a ring for control path, and a
balanced binary tree for data path.
2.3 Analysis
Each cluster in the hierarchy has between k and 3k-1 members.
Then for the control topology, a host that belongs only to layer L0
peers with O(k) other hosts for exchange of control messages. In
general, a host that belongs to layer L i and no other higher layer,
peers with O(k) other hosts in eachof the layers
fore, the control overhead for this member is O(k i). Hence, the
cluster-leader of the highest layer cluster (Host C0 in Figure 3), peers
with a total of O(k log N) neighbors. This is the worst case control
overhead at a member.
It follows using amortized cost analysis that the control overhead
at an average member is a constant. The number of members that
occur in layer L i and no other higher layer is bounded by O(N/k^i).
Therefore, the amortized control overhead at an average member is
\frac{1}{N}\sum_{i=0}^{\log_k N}\frac{N}{k^{i}}\,O(k \cdot i) \;=\; O(k)
with asymptotically increasing N . Thus, the control overhead is
O(k) for the average member, and O(k log N) in the worst case.
The same holds analogously for stress at members on the basic data
path 3 . Also, the number of application-level hops on the basic data
path between any pair of members is O(log N).
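For completeness, the amortization argument can be unpacked as follows (our own worked version, under the assumption stated above that O(N/k^i) members have L i as their topmost layer and each incurs O(k \cdot i) control overhead):

\frac{1}{N}\sum_{i=0}^{\log_k N}\frac{N}{k^{i}}\,O(k \cdot i)
  \;=\; O(k)\sum_{i=0}^{\log_k N}\frac{i}{k^{i}}
  \;\le\; O(k)\sum_{i=0}^{\infty}\frac{i}{k^{i}}
  \;=\; O(k)\cdot\frac{k}{(k-1)^{2}}
  \;=\; O(k),

since \sum_{i \ge 0} i\,x^{i} = x/(1-x)^{2} with x = 1/k < 1; the bound is therefore a constant for any fixed k \ge 2, independent of N.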
While O(k log N) peers on the data path is an acceptable upper
bound, we have defined enhancements that further reduce the upper bound
on the number of peers of a member to a constant. The stress
at each member on this enhanced data path (created using local transformations
of the basic data path) is thus reduced to a constant, while
the number of application-level hops between any pair of members
still remains bounded by O(log N). We outline this enhancement to
the basic data path in [4].
2.4 Invariants
All the properties described in the analysis hold as long as the hierarchy
is maintained. Thus, the objective of the NICE protocol is to
scalably maintain the host hierarchy as new members join and existing
members depart. Specifically, the protocol described in the next
section maintains the following set of invariants:
3 Note that the stress metric at members is equivalent to the degree
of the members on the data delivery tree.
- At every layer, hosts are partitioned into clusters of size between
k and 3k-1.
- All hosts belong to an L0 cluster, and each host belongs to
only a single cluster at any layer.
- The cluster leaders are the centers of their respective clusters
and form the immediate higher layer.
3. PROTOCOL DESCRIPTION
In this section we describe the NICE protocol using a high-level
description. Detailed description of the protocol (including packet
formats and pseudocode) can be found in [4].
We assume the existence of a special host that all members know
of a priori. Using nomenclature developed in [10], we call this host
the Rendezvous Point (RP). Each host that intends to join the application-layer
multicast group contacts the RP to initiate the join pro-
cess. For ease of exposition, we assume that the RP is always the
leader of the single cluster in the highest layer of the hierarchy. It interacts
with other cluster members in this layer on the control path,
and is bypassed on the data path. (Clearly, it is possible for the RP to
not be part of the hierarchy, and for the leader of the highest layer
cluster to maintain a connection to the RP, but we do not belabor
that complexity further). For an application such as streaming media
delivery, the RP could be a distinguished host in the domain of
the data source.
The NICE protocol itself has three main components: initial cluster
assignment as a new host joins, periodic cluster maintenance and
refinement, and recovery from leader failures. We discuss these in
turn.
3.1 New Host Joins
When a new host joins the multicast group, it must be mapped
to some cluster in layer L0 . We illustrate the join procedure in Figure
5. Assume that host A12 wants to join the multicast group. First,
it contacts the RP with its join query (Panel 0). The RP responds
with the hosts that are present in the highest layer of the hierarchy.
The joining host then contacts all members in the highest layer (Panel
1) to identify the member closest to itself. In the example, the highest
layer L2 has just one member, C0 , which by default is the closest
member to A12 amongst layer L2 members. Host C0 informs A12
of the three other members (B0 ; B1 and B2 ) in its L1 cluster. A12
then contacts each of these members with the join query to identify
the closest member among them (Panel 2), and iteratively uses this
procedure to find its L0 cluster.
It is important to note that any host, H , which belongs to any
layer L i is the center of its L i-1 cluster, and recursively, is an approximation
of the center among all members in all L0 clusters that
are below this part of the layered hierarchy. Hence, querying each
layer in succession from the top of the hierarchy to layer L0 results
in a progressive refinement by the joining host to find the most appropriate
layer L0 cluster to join that is close to the joining member.
The outline of this operation is presented in pseudocode as Procedure
BasicJoinLayer in Figure 6.
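The same top-down refinement can be written compactly as follows (a sketch mirroring Procedure BasicJoinLayer; query_members and dist are assumed helper functions, not part of the paper):

# Sketch of the layer-by-layer join refinement (our reading of BasicJoinLayer).
# query_members(host, layer) returns the membership of that host's cluster in
# the given layer; dist(a, b) returns a latency estimate between two hosts.
def basic_join_layer(h, i, rp, top_layer, query_members, dist):
    j = top_layer
    cluster = query_members(rp, j)                  # topmost-layer cluster, from the RP
    while j > i:
        y = min(cluster, key=lambda x: dist(h, x))  # closest member in this layer
        cluster = query_members(y, j - 1)           # descend into y's cluster below
        j -= 1
    return cluster                                  # the layer-i cluster that h joins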
We assume that all hosts are aware of only a single well-known
host, the RP, from which they initiate the join process. Therefore,
overheads due to join query-response messages are highest at the RP
and decrease down the layers of the hierarchy. Under a very rapid
sequence of joins, the RP will need to handle a large number of such
join query-response messages. Alternate and more scalable join schemes
Figure 5: Host A12 joins the multicast group (panels show the successive join queries: A12 contacts the RP, then the topmost-layer member C0, then the L1 members B0, B1 and B2, and finally attaches to its L0 cluster).
Procedure: BasicJoinLayer(h, i)
Cl j <- Query(RP, -)
while (j > i)
  Find y s.t. dist(h, y) <= dist(h, x), for all x in Cl j
  Cl j-1 <- Query(y, j-1)
  Decrement j
endwhile
Join cluster Cl j
Figure 6: Basic join operation for member h, to join layer L i . i = 0 for a
new member. If i > 0, then h is already part of
layer L i-1 . Query(y, j-1) seeks the membership information
of Cl j-1 (y) from member y. Query(RP, -) seeks the membership
information of the topmost layer of the hierarchy, from the
RP.
are possible if we assume that the joining host is aware of some
other "nearby"host that is already joined to the overlay. In fact, both
Pastry [19] and Tapestry [23] alleviate a potential bottleneck at the
RP for a rapid sequence of joins, based on such an assumption.
3.1.1 Join Latency
The joining process involves a message overhead of O(k log N)
query-response pairs. The join-latency depends on the delays incurred
in these exchanges, which is typically about O(log N) round-trip
times. In our protocol, we aggressively locate possible "good"
peers for a joining member, and the overhead for locating the appropriate
attachments for any joining member is relatively large.
To reduce the delay between a member joining the multicast group,
and its receipt of the first data packet on the overlay, we allow joining
members to temporarily peer, on the data path, with the leader
of the cluster of the current layer it is querying. For example, in
Figure 5, when A12 is querying the hosts B0 , B1 and B2 for the
closest point of attachment, it temporarily peers with C0 (leader of
the layer L1 cluster) on the data path. This allows the joining host to
start receiving multicast data on the group within a single round-trip
latency of its join.
3.1.2 Joining Higher Layers
An important invariant in the hierarchical arrangement of hosts
is that the leader of a cluster be the center of the cluster. Therefore,
as members join and leave clusters, the cluster-leader may occasionally
change. Consider a change in leadership of a cluster, C , in layer
L j . The current leader of C removes itself from all layers L j+1 and
higher to which it is attached. A new leader is chosen for each of
these affected clusters. For example, a new leader, h, of C in layer
L j is chosen which is now required to join its nearest L j+1 cluster.
This is its current super-cluster (which by definition is the cluster in
layer L j+1 to which the outgoing leader of C was joined to), i.e. the
new leader replaces the outgoing leader in the super-cluster. How-
ever, if the super-cluster information is stale and currently invalid,
then the new leader, h, invokes the join procedure to join the nearest
L j+1 cluster. It calls BasicJoinLayer(h, j+1) and the routine
terminates when the appropriate layer L j+1 cluster is found. Also
note that the BasicJoinLayer requires interaction of the member h
with the RP. The RP, therefore, aids in repairing the hierarchy from
occasional overlay partitions, i.e. if the entire super-cluster information
becomes stale in between the periodic HeartBeat messages
that are exchanged between cluster members. If the RP fails, for
correct operation of our protocol, we require that it be capable of
recovery within a reasonable amount of time.
3.2 Cluster Maintenance and Refinement
Each member H of a cluster C sends a HeartBeat message every
h seconds to each of its cluster peers (neighbors on the control
topology). The message contains the distance estimate of H to each
other member of C . It is possible for H to have inaccurate or no
estimate of the distance to some other members, e.g. immediately
after it joins the cluster.
The cluster-leader includes the complete updated cluster membership
in its HeartBeat messages to all other members. This allows
existing members to set up appropriate peer relationships with new
cluster members on the control path. For each cluster in level L i ,
the cluster-leader also periodically sends its immediate higher
layer cluster membership (which is the super-cluster for all the other
members of the cluster) to that L i cluster.
All of the cluster member state is sent via unreliable messages
and is kept by each cluster member as soft-state, refreshed by the
periodic HeartBeat messages. A member H is declared no longer
part of a cluster independently by all other members in the cluster
if they do not receive a message from H for a configurable number
of HeartBeat message intervals.
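As a concrete illustration of this soft-state rule (the threshold of three missed HeartBeats below is an arbitrary example value, not one prescribed by the protocol):

# Illustrative soft-state expiry check: a peer is dropped from the local cluster
# view if nothing has been heard from it for a configurable number of
# HeartBeat periods (three is just an example value).
def expired_peers(last_heard, now, heartbeat_period, missed_threshold=3):
    """last_heard: dict mapping peer id -> time of the last HeartBeat received."""
    timeout = missed_threshold * heartbeat_period
    return [peer for peer, t in last_heard.items() if now - t > timeout]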
3.2.1 Cluster Split and Merge
A cluster-leader periodically checks the size of its cluster, and appropriately
splits or merges the cluster when it detects a size bound
violation. A cluster that just exceeds the cluster size upper bound, 3k-1, is split
into two equal-sized clusters.
For correct operation of the protocol, we could have chosen the
cluster size upper bound to be any value >= 2k-1. However, if
2k-1 was chosen as the upper bound, then the cluster would need
to split when it exceeds this upper bound (i.e. reaches the size 2k).
Subsequently, an equal-sized split would create two clusters of size
k each. However, a single departure from any of these new clusters
would violate the size lower bound and require a cluster merge operation
to be performed. Choosing a larger upper bound (e.g. 3k-1)
avoids this problem. When the cluster exceeds this upper bound, it
is split into two clusters of size at least 3k/2, and therefore, requires
at least k=2 member departures before a merge operation needs to
be invoked.
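The arithmetic behind this choice can be checked directly; the short sketch below (an illustrative helper of our own) compares the 3k-1 bound with the 2k-1 alternative for k = 3, the value used later in our simulations.

# Worked check of the split/merge thresholds (illustrative helper, names ours).
def departures_before_merge(k, upper_bound):
    split_size = upper_bound + 1        # a cluster splits once it exceeds the bound
    smaller_half = split_size // 2      # size of the smaller cluster after an equal split
    return smaller_half - k + 1         # departures needed to fall below the lower bound k

for bound in (3 * 3 - 1, 2 * 3 - 1):    # 3k-1 = 8 versus 2k-1 = 5, with k = 3
    print(bound, departures_before_merge(3, bound))
# upper bound 8: a merge becomes necessary only after 2 departures
# upper bound 5: a single departure already forces a merge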
The cluster leader initiates this cluster split operation. Given a set
of hosts and the pairwise distances between them, the cluster split
operation partitions them into subsets that meet the size bounds, such
that the maximum radius (in a graph-theoretic sense) of the new set
of clusters is minimized. This is similar to the K-center problem
(known to be NP-Hard) but with an additional size constraint. We
use an approximation strategy - the leader splits the current cluster
into two equal-sized clusters, such that the maximum of the radii
among the two clusters is minimized. It also chooses the centers of
the two partitions to be the leaders of the new clusters and transfers
leadership to the new leaders through LeaderTransfer messages. If
these new clusters still violate the size upper bound, they are split
by the new leaders using identical operations.
If the size of a cluster, Cl i (J) (in layer L i ), falls
below k, the leader, J , initiates a cluster merge operation. Note, J
itself belongs to a layer L i+1 cluster, Cl i+1 (J). J chooses its closest
cluster-peer, K , in Cl i+1 (J). K is also the leader of a layer L i
cluster, Cl i (K). J initiates the merge operation of Cl i (J) with Cl i (K)
by sending a ClusterMergeRequest message to K . J updates the
members of Cl i (J) with this merge information. K similarly updates
the members of Cl i (K). Following the merge, J removes itself
from layer L i+1 (i.e. from cluster Cl i+1 (J)).
3.2.2 Refining Cluster Attachments
When a member is joining a layer, it may not always be able to
locate the closest cluster in that layer (e.g. due to lost join query
or join response, etc.) and instead attaches to some other cluster in
that layer. Therefore, each member, H , in any layer (say L i ) periodically
probes all members in its super-cluster (they are the leaders of
layer L i clusters), to identify the closest member (say J ) to itself in
the super-cluster. If J is not the leader of the L i cluster to which H
belongs then such an inaccurate attachment is detected. In this case,
H leaves its current layer L i cluster and joins the layer L i cluster
of which J is the leader.
3.3 Host Departure and Leader Selection
When a host H leaves the multicast group, it sends a Remove
message to all clusters to which it is joined. This is a graceful-leave.
However, if H fails without being able to send out this message, all
cluster peers of H detect this departure through non-receipt of the
periodic HeartBeat message from H . If H was a leader of a clus-
ter, this triggers a new leader selection in the cluster. Each remaining
member, J , of the cluster independently selects a new leader of
the cluster, depending on who J estimates to be the center among
these members. Multiple leaders are reconciled into a single leader
of the cluster through exchange of regular HeartBeat messages using
an appropriate flag (LeaderTransfer) each time two candidate
leaders detect this multiplicity. We present further details of these
operations in [4].
It is possible for members to have an inconsistent view of the
cluster membership, and for transient cycles to develop on the data
path. These cycles are eliminated once the protocol restores the hierarchy
invariants and reconciles the cluster view for all members.
4. EXPERIMENTAL METHODOLOGY
We have analyzed the performance of the NICE protocol using
detailed simulations and a wide-area implementation. In the simulation
environment, we compare the performance of NICE to three
other schemes: multi-unicast, native IP-multicast using the Core Based
Tree protocol [2], and the Narada application-layer multicast
protocol (as given in [10]). In the Internet experiments, we benchmark
the performance metrics against direct unicast paths to the member
hosts.
Clearly, native IP multicast trees will have the least (unit) stress,
since each link forwards only a single copy of each data packet. Unicast
paths have the lowest latency and so we consider them to be of
unit stretch 4 . They provide us a reference against which to compare
the application-layer multicast protocols.
4.1 Data Model
In all these experiments, we model the scenario of a data stream
source multicasting to the group. We chose a single end-host, uniformly
at random, to be the data source generating a constant bit rate
data. Each packet in the data sequence, effectively, samples the data
path on the overlay topology at that time instant, and the entire data
packet sequence captures the evolution of the data path over time.
4.2 Performance Metrics
We compare the performance of the different schemes along the
following dimensions:
- Quality of data path: This is measured by three different metrics
- tree degree distribution, stress on links and routers and
stretch of data paths to the group members.
- Recovery from host failure: As hosts join and leave the multicast
group, the underlying data delivery path adapts accordingly
to reflect these changes. In our experiments, we modeled
member departures from the group as ungraceful depar-
tures, i.e. members fail instantly and are unable to send appropriate
leave messages to their existing peers on the topol-
ogy. Therefore, in transience, particularly after host failures,
paths to some hosts may be unavailable. It is also possible for
multiple paths to exist to a single host and for cycles to develop
temporarily.
To study these effects, we measured the fraction of hosts that
correctly receive the data packets sent from the source as the
group membership changed. We also recorded the number
of duplicates at each host. In all of our simulations, for both
the application-layer multicast protocols, the number of duplicates
was insignificant and zero in most cases.
- Control traffic overhead: We report the mean, variance and
the distribution of the control bandwidth overheads at both
routers and end hosts.
5. SIMULATION EXPERIMENTS
We have implemented a packet-level simulator for the four different
protocols. Our network topologies were generated using the
Transit-Stub graph model, using the GT-ITM topology generator [5].
All topologies in these simulations had 10,000 routers with an average
node degree between 3 and 4. End-hosts were attached to a
set of routers, chosen uniformly at random, from among the stub-
domain nodes. The number of such hosts in the multicast group
were varied between 8 and 2048 for different experiments. In our
simulations, we only modeled loss-less links; thus, there is no data
loss due to congestion, and no notion of background traffic or jit-
ter. However, data is lost whenever the application-layer multicast
protocol fails to provide a path from the source to a receiver, and duplicates
are received whenever there is more than one path. Thus,
our simulations study the dynamics of the multicast protocol and its
effects on data distribution; in our implementation, the performance
is also affected by other factors such as additional link latencies due
to congestion and drops due to cross-traffic congestion.
4 There are some recent studies [20, 1] to show that this may not always
be the case; however, we use the native unicast latency as the
reference to compare the performance of the other schemes.
For comparison, we have implemented the entire Narada protocol
from the description given in [10]. The Narada protocol is a "mesh-
first" application-layer multicast approach, designed primarily for
small multicast groups. In this approach the members distributedly
construct a mesh which is an overlay topology where multiple paths
exist between pairs of members. Each member participates in a
routing protocol on this overlay mesh topology to generate source-specific
trees that reach all other members. In Narada, the initial set
of peer assignments to create the overlay mesh is done randomly.
While this initial data delivery path may be of "poor" quality, over
time Narada adds "good" links and discards "bad" links from the
overlay. Narada has O(N 2 ) aggregate control overhead because of
its mesh-first nature: it requires each host to periodically exchange
updates and refreshes with all other hosts. The protocol, as defined
in [10], has a number of user-defined parameters that we needed
to set. These include the link add/drop thresholds, link add/drop
probe frequency, the periodic refresh rates, the mesh degree, etc.
We present detailed description of our implementation of the Narada
protocol, including the impact of different choices of parameters,
in [4].
5.1 Simulation Results
We have simulated a wide range of topologies, group sizes, member
join-leave patterns, and protocol parameters. For NICE, we set
the cluster size parameter, k, to 3 in all of the experiments presented
here. Broadly, our findings can be summarized as follows:
- NICE trees have data paths that have stretch comparable to
Narada.
- The stress on links and routers is lower in NICE, especially
as the multicast group size increases.
- The failure recovery of both the schemes is comparable.
- The NICE protocol demonstrates that it is possible to provide this
performance with orders of magnitude lower control overhead
for groups of size > 32.
We begin with results from a representative experiment that captures
all of the different aspects of comparing the various protocols.
5.1.1 Simulation Representative Scenario
This experiment has two different phases: a join phase and a leave
phase. In the join phase a set of 128 members 5 join the multicast
group uniformly at random between the simulated time 0 and 200
seconds. These hosts are allowed to stabilize into an appropriate
overlay topology until simulation time 1000 seconds. The leave phase
starts at time 1000 seconds: hosts leave the multicast group over
a short duration of 10 seconds. This is repeated four more times,
at 100 second intervals. The remaining 48 members continue to be
part of the multicast group until the end of simulation. All member
departures are modeled as host failures since they have the most
damaging effect on data paths. We experimented with different numbers
of member departures, from a single member to 16 members
leaving over the ten second window. Sixteen departures from a group
of size 128 within a short time window is a drastic scenario, but it
helps illustrate the failure recovery modes of the different protocols
better. Member departures in smaller sizes cause correspondingly
lower disruption on the data paths.
5 We show results for the 128 member case because that is the group
size used in the experiments reported in [10]; NICE performs increasingly
better with larger group sizes.
We experimented with different periodic refresh rates for Narada.
For a higher refresh rate the recovery from host failures is quicker,
but at a cost of higher control traffic overhead. For Narada, we used
different values for route update frequencies and periods for probing
other mesh members to add or drop links on the overlay. In our re-
sults, we report results from using route update frequencies of once
every 5 seconds (labeled Narada-5), and once every 30 seconds (labeled
Narada-30). The second update period corresponds to
what was used in [10]; we ran with the 5 second update period since
the heartbeat period in NICE was set to 5 seconds. Note that we
could run with a much smaller heartbeat period in NICE without
significantly increasing control overhead since the control messages
are limited within clusters and do not traverse the entire group. We
also varied the mesh probe period in Narada and observed the data path
instability effect discussed above. In these results, we set the Narada
mesh probe period to 20 seconds.
Data Path Quality
In Figures 7 and 8, we show the average link stress and the average
path lengths for the different protocols as the data tree evolves
during the member join phase. Note that the figure shows the actual
path lengths to the end-hosts; the stretch is the ratio of average path
length of the members of a protocol to the average path length of
the members in the multi-unicast protocol.
As explained earlier, the join procedure in NICE aggressively finds
good points of attachment for the members in the overlay topology,
and the NICE tree converges quicker to a stable value (within 350
seconds of simulated time). In contrast, the Narada protocols gradually
improve the mesh quality, and consequently so does the data
path over a longer duration. Its average data path length converges
to a stable value of about 23 hops between 500 and 600 seconds
of the simulated time. The corresponding stretch is about 2.18. In
Narada path lengths improve over time due to addition of "good"
links on the mesh. At the same time, the stress on the tree gradually
increases since Narada decides to add or drop overlay links
based purely on the stretch metric.
The cluster-based data dissemination in NICE reduces average
link stress, and in general, for large groups NICE converges to trees
with about 25% lower average stress. In this experiment, the NICE
tree had lower stretch than the Narada tree; however, in other experiments
the Narada tree had a slightly lower stretch value. In gen-
eral, comparing the results from multiple experimentsover different
group sizes, (See Section 5.1.2), we concluded that the data path
lengths to receivers were similar for both protocols.
In Figures 9 and 10, we plot a cumulative distribution of the stress
and path length metrics for the entire member set (128 members) at a
time after the data paths have converged to a stable operating point.
The distribution of stress on links for the multi-unicast scheme
has a significantly large tail (e.g. links close to the source have a
stress of 127). This should be contrasted with better stress distribution
for both NICE and Narada. Narada uses fewer links
on the topology than NICE, since it is comparably more aggressive
in adding overlay links with shorter lengths to the mesh topology.
However, due to this emphasis on shorter path lengths, the stress
distribution of the links has a heavier tail than NICE. More than 25%
of the links have a stress of four and higher in Narada, compared to
5% in NICE. The distribution of the path lengths for the two protocols
are comparable.
Figure 7: Average link stress as 128 end-hosts join (simulation); average link stress vs. time in seconds.
Figure 8: Average receiver path length as 128 end-hosts join (simulation); average path length (hops) vs. time in seconds, for NICE, Narada-5, IP Multicast and Unicast.
Figure 9: Stress distribution (simulation); cumulative distribution of link stress after the overlay stabilizes (the Unicast curve is truncated and extends to stress = 127).
Figure 10: Path length distribution (simulation); cumulative distribution of data path lengths (overlay hops) after the overlay stabilizes.
Failure Recovery and Control Overheads
To investigate the effect of host failures, we present results from the
second part of our scenario: starting at simulated time 1000 sec-
onds, a set of 16 members leave the group over a 10 second period.
We repeat this procedure four more times and no members leave after
simulated time 1400 seconds when the group is reduced to 48
members. When members leave, both protocols "heal" the data distribution
tree and continue to send data on the partially connected
topology. In Figure 11, we show the fraction of members that correctly
receive the data packets over this duration. Both Narada-5
and NICE have similar performance, and on average, both protocols
restore the data path to all (remaining) receivers within 30 seconds.
We also ran the same experiment with the 30 second route update
period for Narada. The lower refresh rate caused significant disruptions
on the tree with periods of over 100 seconds when more
than 60% of the tree did not receive any data. Lastly, we note that
the data distribution tree used for NICE is the least connected topology
possible; we expect failure recovery results to be much better
if structures with alternate paths are built atop NICE.
In Figure 12, we show the byte-overheads for control traffic at
the access links of the end-hosts. Each dot in the plot represents the
sum of the control traffic (in Kbps) sent or received by each member
in the group, averaged over 10 second intervals. Thus for each 10
second time slot, there are two dots in the plot for each (remaining)
host in the multicast group corresponding to the control overheads
for Narada and NICE. The curves in the plot are the average control
overhead for each protocol. As can be expected, for groups of
size 128, NICE has an order of magnitude lower average overhead,
e.g. at simulation time 1000 seconds, the average control overhead
for NICE is 0.97 Kbps versus 62.05 Kbps for Narada. At the same
time instant, Narada-30 (not shown in the figure) had an average
control overhead of 13.43 Kbps. Note that the NICE control traffic
includes all protocol messages, including messages for cluster
formation, cluster splits, merges, layer promotions, and leader elections.
5.1.2 Aggregate Results
We present a set of aggregate results as the group size is varied.
The purpose of this experiment is to understand the scalability of
the different application-layer multicast protocols. The entire set of
members join in the first 200 seconds, and then we run the simulation
for 1800 seconds to allow the topologies to stabilize. In Table 1,
we compare the stress on network routers and links, the overlay path
lengths to group members and the average control traffic overheads
at the network routers. For each metric, we present both the mean
and the standard deviation. Note, that the Narada protocol involves
an aggregate control overhead of O(N 2 ), where N is the size of the
group. Therefore, in our simulation setup, we were unable to simulate
Narada with groups of size 1024 or larger since the completion
time for these simulations was on the order of a day for a single run
of one experiment on a 550 MHz Pentium III machine with 4 GB of
RAM.
Figure 11: Fraction of members that received data packets over the duration of member failures (simulation); 128 end-hosts join followed by periodic leaves in sets of 16.
Figure 12: Control bandwidth required at end-host access links (simulation); control traffic bandwidth (Kbps) vs. time in seconds.
Table 1: Data path quality and control overheads for varying multicast group sizes (simulation); for each group size, the table reports router stress, link stress, and path length for Narada-5 and NICE, and bandwidth overheads (Kbps) for Narada-30 and NICE.
Narada and NICE tend to converge to trees with similar path lengths.
The stress metric for both network links and routers, however, is
consistently lower for NICE when the group size is large (64 and
greater). It is interesting to observe the standard deviation of stress
as it changes with increasing group size for the two protocols. The
standard deviation of stress increased for Narada with increasing group
sizes. In contrast, the standard deviation of stress for NICE remains
relatively constant; the topology-based clustering in NICE distributes
the data path more evenly among the different links of the underlying
network regardless of group size.
The control overhead numbers in the table are different from the
ones in Figure 12; the column in the table is the average control
traffic per network router as opposed to control traffic at an end-
host. Since the control traffic gets aggregated inside the network,
the overhead at routers is significantly higher than the overhead at
an end-host. For these router overheads, we report the values of the
Narada-30 version, in which the route update frequency was set to 30 seconds.
Recall that the Narada-30 version has poor failure recovery
performance, but is much more efficient (specifically 5 times less
overhead with groups of size 128) than the Narada-5 version. The
HeartBeat messages in NICE were still sent at 5 second intervals.
For the NICE protocol, the worst case control overheads at members
increase logarithmically with increase in group size. The control
overheads at routers (shown in Table 1), show a similar trend.
Thus, although we experimented with up to 2048 members in our
simulation study, we believe that our protocol scales to even larger
groups.
6. WIDE-AREA IMPLEMENTATION
We have implemented the complete NICE protocol and experimented
with our implementation over a one-month period, with groups of up
to 100 members distributed across 8 different sites. Our experimental
topology is shown in Figure 13. The number of members
at each site was varied, starting from 2, across different experiments.
For example, in one of the experiments reported in this
section, we had 2 members each in sites B, G and H, 4 each at A,
in C and 8 in D. Unfortunately, experiments with much
larger groups were not feasible on our testbed. However, our implementation
results for protocol overheads closely match our simulation
experiments, and we believe our simulations provide a reasonable
indication of how the NICE implementation would behave
with larger group sizes.
6.1 Implementation Specifics
We have conducted experiments with data sources at different sites.
In this paper, we present a representative set of the experiments where
the data stream source is located at site C in Figure 13. In the fig-
ure, we also indicate the typical direct unicast latency (in millisec-
onds) from the site C, to all the other sites. These are estimated one-way
latencies obtained using a sequence of application layer (UDP)
probes. Data streams were sent from the source host at site C, to all
other hosts, using the NICE overlay topology.
Figure 13: Internet experiment sites and direct unicast latencies from C (site C is the source; named sites include A: cs.ucsb.edu, B: asu.edu, C: cs.umd.edu, F: umbc.edu, G: poly.edu).
For our implementation, we experimented with different HeartBeat rates; in the results
presented in this section, we set the HeartBeat message period to 10
seconds.
In our implementation, we had to estimate the end-to-end latency
between hosts for various protocol operations, including member
joins, leadership changes, etc. We estimated the latency between
two end-hosts using a low-overhead estimator that sent a sequence
of application-layer (UDP) probes. We controlled the number of
probes adaptively using observed variance in the latency estimates.
Further, instead of using the raw latency estimates as the distance
metric, we used a simple binning scheme to map the raw latencies
to a small set of equivalence classes. Specifically, two latency estimates
were considered equivalent if they mapped to the same equivalence
class, and this resulted in faster convergence of the overlay
topology. The specific latency ranges for each class were 0-1 ms,
1-5 ms, 5-10 ms, 10-20 ms, 20-40 ms, 40-100 ms, 100-200 ms and
greater than 200 ms.
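A minimal sketch of this binning follows (the class boundaries are the ones listed above; the boundary handling and the names are our own choices):

# Map a raw latency estimate (ms) to one of the equivalence classes above.
BIN_EDGES_MS = [1, 5, 10, 20, 40, 100, 200]

def latency_class(latency_ms):
    for i, edge in enumerate(BIN_EDGES_MS):
        if latency_ms <= edge:
            return i
    return len(BIN_EDGES_MS)            # the "greater than 200 ms" class

# Two estimates are treated as equivalent if they fall in the same class.
print(latency_class(3.2) == latency_class(4.9))   # True: both in the 1-5 ms bin
print(latency_class(3.2) == latency_class(12.0))  # False: 1-5 ms vs. 10-20 ms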
To compute the stretch for end-hosts in the Internet experiments,
we used the ratio of the latency from between the source and a host
along the overlay to the direct unicast latency to that host. In the
wide-area implementation, when a host A receives a data packet
forwarded by member B along the overlay tree, A immediately sends
an overlay-hop acknowledgment back to B. B logs the round-trip
latency between its initial transmission of the data packet to A
and the receipt of the acknowledgment from A. After the entire
experiment is done, we calculated the overlay round-trip latencies
for each data packet by adding up the individual overlay-hop latencies
available from the logs at each host. We estimated the one-way
overlay latency as half of this round trip latency. We obtained
the unicast latencies using our low-overhead estimator immediately
after the overlay experiment terminated. This guaranteed that the
measurements of the overlay latencies and the unicast latencies did
not interfere with each other.
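The stretch computation used above can be summarized with a small sketch (our reconstruction; the function name and inputs are illustrative):

# Estimate a host's stretch from the per-hop round-trip times logged along the
# overlay path and the direct unicast latency measured afterwards.
def overlay_stretch(hop_rtts_ms, unicast_latency_ms):
    overlay_one_way = sum(hop_rtts_ms) / 2.0   # one-way latency = half the summed RTTs
    return overlay_one_way / unicast_latency_ms

# Example: three overlay hops with RTTs of 10, 30 and 40 ms to a host whose
# direct unicast latency is 33.3 ms give a stretch of about 1.2.
print(round(overlay_stretch([10, 30, 40], 33.3), 2))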
6.2 Implementation Scenarios
The Internet experiment scenarios have two phases: a join phase
and a rapid membership change phase. In the join phase, a set of
member hosts randomly join the group from the different sites.
Figure 14: Stress distribution (testbed); cumulative distribution of stress at the members of the 32, 64, and 96 member groups.
The hosts are then allowed to stabilize into an appropriate overlay delivery
tree. After this period, the rapid membership change phase
starts, where host members randomly join and leave the group. The
average member lifetime in the group, in this phase, was set to 30
seconds. As in the simulation studies, all member departures are
ungraceful and allow us to study the worst case protocol behavior.
Finally, we let the remaining set of members organize into a stable
data delivery tree. We present results for three different groups
of size 32, 64, and 96 members.
Data Path Quality
In Figure 14, we show the cumulative distribution of the stress metric
at the group members after the overlay stabilizes at the end of
the join phase. For all group sizes, typical members have unit stress
(74% to 83% of the members in these experiments). The stress for
the remaining members varies between 3 and 9. These members are
precisely the cluster leaders in the different layers (recall that the
cluster size lower and upper bounds for these experiments are 3 and 9,
respectively). The stress for these members can be reduced further
by using the high-bandwidth data path enhancements, described in [4].
For larger groups, the number of members with higher stress (i.e.
between 3 and 9 in these experiments) is larger, since the number of
clusters (and hence, the number of cluster leaders) is larger. How-
ever, as expected, this increase is only logarithmic in the group size.
In Figure 15, we plot the cumulative distribution of the stretch
metric. Instead of plotting the stretch value for each single host, we
group them by the sites at which they are located. For all the member
hosts at a given site, we plot the mean and the 95% confidence
intervals. Apart from the sites C, D, and E, all the sites have near
unit stretch. However, note that the source of the data streams in
these experiments was located at site C and hosts in the sites C,
D, and E had very low latency paths from the source host. The actual
end-to-end latencies along the overlay paths to all the sites are
shown in Figure 16. For the sites C, D and E these latencies were 3.5
ms, 3.5 ms and 3.0 ms respectively. Therefore, the primary contribution
to these latencies is packet processing and overlay forwarding
on the end-hosts themselves.
In Table 2, we present the mean and the maximum stretch for
the different members that had direct unicast latency of at least 2
ms from the source (i.e. sites A, B, G and H), for all the different
sizes. The mean stretch for all these sites is low. However,
in some cases we do see relatively large worst case stretches (e.g.
in the 96-member experiment there was one member for which
the stretch of the overlay path was 4.63).
Figure 15: Stretch distribution (testbed); distribution of stretch by site (64 members).
Figure 16: Latency distribution (testbed); distribution of overlay end-to-end latency (in ms) by site (64 members).
Failure Recovery
In this section, we describe the effects of group membership changes
on the data delivery tree. To do this, we observe how successful the
overlay is in delivering data during changes to the overlay topology.
We measured the number of correctly received packets by different
members during the rapid membership change phase of
the experiment, which begins after the initial member set has stabilized
into the appropriate overlay topology. This phase lasts for 15
minutes. Members join and leave the group at random such that the
average lifetime of a member in the group is 30 seconds.
In Figure 17, we plot over time the fraction of members that successfully
received the different data packets. A number of
membership changes happened over this duration. In Figure 18 we
plot the cumulative distribution of packet losses seen by the different
members over the entire 15 minute duration. The maximum number
of packet losses seen by a member was 50 out of 900 (i.e. about
5.6%), and 30% of the members did not encounter any packet losses.
Even under these rapid changes to the group membership, the largest
continuous duration of packet losses for any single host was 34 sec-
onds, while typical members experienced a maximum continuous
data loss for only two seconds - this was true for all but 4 of the
members. These failure recovery statistics are good enough for use
in most data stream applications deployed over the Internet. Note
that in this experiment, only three individual packets (out of 900)
suffered heavy losses: data packets at times 76 s, 620 s, and 819 s
were not received by 51, 36 and 31 members respectively.
Figure 17: Fraction of members that correctly received data packets as the group membership continuously changed (testbed).
Figure 18: Cumulative distribution of the fraction of packets lost for different members out of the entire sequence of 900 packets during the rapid membership change phase (testbed).
Control Overheads
Finally, we present the control traffic overheads (in Kbps) in Table 2
for the different group sizes. The overheads include control packets
that were sent as well as received. We show the average and maximum
control overhead at any member. We observed that the control
traffic at most members lies between 0.2 Kbps and 2.0 Kbps for the
different group sizes. In fact, about 80% of the members require less
than 0.9 Kbps of control traffic for topology management. More in-
terestingly, the average control overheads and the distributions do
not change significantly as the group size is varied. The worst case
control overhead is also fairly low (less than 3 Kbps).
Table 2: Average and maximum values of the different metrics for different group sizes (testbed); for each group size, the table reports the mean and maximum stress, stretch, and control overheads (Kbps).
7. RELATED WORK
A number of other projects have explored implementing multi-cast
at the application layer. They can be classified into two broad
categories: mesh-first (Narada [10], Gossamer[7]) and tree-first protocols
(Yoid [12], ALMI [15], Host-Multicast [22]). Yoid and Host-
Multicast define a distributed tree building protocol between the
end-hosts, while ALMI uses a centralized algorithm to create a minimum
spanning tree rooted at a designated single source of multi-cast
data distribution. The Overcast protocol [14] organizes a set of
proxies (called Overcast nodes) into a distribution tree rooted at a
central source for single source multicast. A distributed tree-building
protocol is used to create this source specific tree, in a manner similar
to Yoid. RMX [8] provides support for reliable multicast data
delivery to end-hosts using a set of similar proxies, called Reliable
Multicast proXies. Application end-hosts are configured to affiliate
themselves with the nearest RMX. The architecture assumes the existence
of an overlay construction protocol, using which these proxies
organize themselves into an appropriate data delivery path. TCP
is used to provide reliable communication between each pair of peer
proxies on the overlay.
Some other recent projects (Chord [21], Content Addressable Networks
(CAN) [17], Tapestry [23] and Pastry [19]) have also addressed
the scalability issue in creating application layer overlays, and are,
therefore, closely related to our work. CAN defines a virtual d-dimensional
Cartesian coordinate space, and each overlay host "owns" a
part of this space. In [18], the authors have leveraged the scalable
structure of CAN to define an application layer multicast scheme, in
which hosts maintain O(d) state and the path lengths are O(d N^{1/d})
application level hops, where N is the number of hosts in the net-
work. Pastry [19] is a self-organizing overlay network of nodes,
where logical peer relationships on the overlay are based on matching
prefixes of the node identifiers. Scribe [6] is a large-scale event
notification infrastructure that leverages the Pastry system to create
groups and build efficient application layer multicast paths to the
group members for dissemination of events. Being based on Pas-
try, it has similar overlay properties, namely O(log_{2^b} N) state
at members, and O(log_{2^b} N) application level hops between members
6 . Bayeux [24] is another architecture for application layer mul-
ticast, where the end-hosts are organized into a hierarchy as defined
by the Tapestry overlay location and routing system [23]. A level
of the hierarchy is defined by a set of hosts that share a common
suffix in their host IDs. Such a technique was proposed by Plaxton
et al. [16] for locating and routing to named objects in a net-
work. Therefore, hosts in Bayeux maintain O(b log b N) state and
end-to-end overlay paths have O(log b N) application level hops.
As discussed in Section 2.3, our proposed NICE protocol incurs an
amortized O(k) state at members and the end-to-end paths between
members have O(log_k N) application level hops. Like Pastry and
Tapestry, NICE also chooses overlay peers based on network locality
which leads to low stretch end-to-end paths.
6 b is a small constant.
We summarize the above as follows: For both NICE and CAN-multicast,
members maintain constant state for other members, and
consequently exchange a constant amount of periodic refresh messages.
This overhead is logarithmic for Scribe and Bayeux. The
overlay paths for NICE, Scribe, and Bayeux have a logarithmic number
of application level hops, and path lengths in CAN-multicast
asymptotically have a larger number of application level hops. Both
NICE and CAN-multicast use a single well-known host (the RP, in
our nomenclature) to bootstrap the join procedure of members. The
join procedure, therefore, incurs a higher overhead at the RP and
the higher layers of the hierarchy than the lower layers. Scribe and
Bayeux assume members are able to find different "nearby" members
on the overlay through out-of-band mechanisms, from which to bootstrap
the join procedure. Using this assumption, the join overheads
for a large number of joining members can be amortized over the
different such "nearby" bootstrap members in these schemes.
8. CONCLUSIONS
In this paper, we have presented a new protocol for application-layer
multicast. Our main contribution is an extremely low overhead
hierarchical control structure over which different data distribution
paths can be built. Our results show that it is possible to build
and maintain application-layer multicast trees with very little over-
head. While the focus of this paper has been low-bandwidth data
stream applications, our scheme is generalizable to different applications
by appropriately choosing data paths and metrics used to
construct the overlays. We believe that the results of this paper are
a significant first step towards constructing large wide-area applications
over application-layer multicast.
9.
ACKNOWLEDGMENTS
We thank Srinivas Parthasarathy for implementing a part of the
Narada protocol used in our simulation experiments. We also thank
Kevin Almeroth, Lixin Gao, Jorg Liebeherr, Steven Low, Martin
Reisslein and Malathi Veeraraghavan for providing us with user accounts
at the different sites for our wide-area experiments and we
thank Peter Druschel for shepherding the submission of the final
version of this paper.
10. REFERENCES
Resilient overlay networks.
Core Based Trees (CBT): An Architecture for Scalable Multicast Routing.
Scalable Secure Group Communication over IP Multicast.
Scalable application layer multicast.
How to Model an Internetwork.
SCRIBE: A large-scale and decentralized application-level multicast infrastructure
Scattercast: An Architecture for Internet Broadcast Distribution as an Infrastructure Service.
RMX: Reliable Multicast for Heterogeneous Networks.
Enabling Conferencing Applications on the Internet using an Overlay Multicast Architecture.
A Case for End System Multicast.
Multicast Routing in Datagram Internetworks and Extended LANs.
Yoid: Extending the Multicast Internet Architecture
Steiner points in tree metrics don't (really) help.
Reliable Multicasting with an Overlay Network.
ALMI: An Application Level Multicast Infrastructure.
Accessing nearby copies of replicated objects in a distributed environment.
A scalable content-addressable network
A Case for Informed Internet Routing and Transport.
Chord: A scalable peer-to-peer lookup service for Internet applications
Host multicast: A framework for delivering multicast to end users.
An Infrastructure for Fault-tolerant Wide-area Location and Routing
Bayeux: An architecture for scalable and fault-tolerant wide-area data dissemination
Gabriel Ghinita , Panos Kalnis , Spiros Skiadopoulos, PRIVE: anonymous location-based queries in distributed mobile systems, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Chiping Tang , Philip K. McKinley, Topology-aware overlay path probing, Computer Communications, v.30 n.9, p.1994-2009, June, 2007
Minh Tran , Wallapak Tavanapong , Wanida Putthividhya, OCS: An effective caching scheme for video streaming on overlay networks, Multimedia Tools and Applications, v.34 n.1, p.25-56, July 2007
Meng Zhang , Li Zhao , Yun Tang , Jian-Guang Luo , Shi-Qiang Yang, Large-scale live media streaming over peer-to-peer networks through global internet, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Chuan Wu , Baochun Li, Optimal peer selection for minimum-delay peer-to-peer streaming with rateless codes, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Jauvane C. de Oliveira , Dewan Tanvir Ahmed , Shervin Shirmohammadi, Performance Enhancement in MMOGs Using Entity Types, Proceedings of the 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, p.25-30, October 22-26, 2007
David Gotz, Scalable and adaptive streaming for non-linear media, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Guang Tan , Stephen A. Jarvis , Xinuo Chen , Daniel P. Spooner, Performance Analysis and Improvement of Overlay Construction for Peer-to-Peer Live Streaming, Simulation, v.82 n.2, p.93-106, February 2006
Yatin Chawathe, Scattercast: an adaptable broadcast distribution framework, Multimedia Systems, v.9 n.1, p.104-118, July
Sergey Gorinsky , Sugat Jain , Harrick Vin , Yongguang Zhang, Design of multicast protocols robust against inflated subscription, IEEE/ACM Transactions on Networking (TON), v.14 n.2, p.249-262, April 2006
Thorsten Strufe , Jens Wildhagen , Gnter Schfer, Towards the Construction of Attack Resistant and Efficient Overlay Streaming Topologies, Electronic Notes in Theoretical Computer Science (ENTCS), 179, p.111-121, July, 2007
Sonia Fahmy , Minseok Kwon, Characterizing overlay multicast networks and their costs, IEEE/ACM Transactions on Networking (TON), v.15 n.2, p.373-386, April 2007
Abdolreza Abdolhosseini Moghadam , Saman Barghi , Hamid Reza Rabiee , Mohammad Ghanbari, A new scheme on recovery from failure in NICE overlay protocol, Proceedings of the 1st international conference on Scalable information systems, May 30-June 01, 2006, Hong Kong
Aditya Ganjam , Hui Zhang, Connectivity restrictions in overlay multicast, Proceedings of the 14th international workshop on Network and operating systems support for digital audio and video, June 16-18, 2004, Cork, Ireland
Baochun Li , Jiang Guo , Mea Wang, iOverlay: a lightweight middleware infrastructure for overlay application implementations, Proceedings of the 5th ACM/IFIP/USENIX international conference on Middleware, October 18-22, 2004, Toronto, Canada
Mojtaba Hosseini , Nicolas D. Georganas, End system multicast protocol for collaborative virtual environments, Presence: Teleoperators and Virtual Environments, v.13 n.3, p.263-278, June 2004
Daria Antonova , Arvind Krishnamurthy , Zheng Ma , Ravi Sundaram, Managing a portfolio of overlay paths, Proceedings of the 14th international workshop on Network and operating systems support for digital audio and video, June 16-18, 2004, Cork, Ireland
Sergey Gorinsky , Sugat Jain , Harrick Vin , Yongguang Zhang, Robustness to inflated subscription in multicast congestion control, Proceedings of the conference on Applications, technologies, architectures, and protocols for computer communications, August 25-29, 2003, Karlsruhe, Germany
Kien A. Hua , Duc A. Tran, Range Multicast for Video on Demand, Multimedia Tools and Applications, v.27 n.3, p.367-391, December 2005
Mengkun Yang , Zongming Fei, A cooperative failure detection mechanism for overlay multicast, Journal of Parallel and Distributed Computing, v.67 n.6, p.635-647, June, 2007
anonymous overlay multicast, Journal of Parallel and Distributed Computing, v.66 n.9, p.1205-1216, September 2006
Shibsankar Das , Jussi Kangasharju, Evaluation of network impact of content distribution mechanisms, Proceedings of the 1st international conference on Scalable information systems, p.35-es, May 30-June 01, 2006, Hong Kong
Ying Cai , Zhan Chen , Wallapak Tavanapong, Caching collaboration and cache allocation in peer-to-peer video systems, Multimedia Tools and Applications, v.37 n.2, p.117-134, April 2008
Shudong Jin , Azer Bestavros, Small-world characteristics of internet topologies and implications on multicast scaling, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.5, p.648-666, 6 April 2006
Rob Sherwood , Seungjoon Lee , Bobby Bhattacharjee, Cooperative peer groups in NICE, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.4, p.523-544, 15 March 2006
Himabindu Pucha , Ying Zhang , Z. Morley Mao , Y. Charlie Hu, Understanding network delay changes caused by routing events, ACM SIGMETRICS Performance Evaluation Review, v.35 n.1, June 2007
Zongpeng Li , Anirban Mahanti, A progressive flow auction approach for low-cost on-demand P2P media streaming, Proceedings of the 3rd international conference on Quality of service in heterogeneous wired/wireless networks, August 07-09, 2006, Waterloo, Ontario, Canada
Tackling group-to-tree matching in large scale group communications, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.11, p.3069-3089, August, 2007
John R. Douceur , Jay R. Lorch , Thomas Moscibroda, Maximizing total upload in latency-sensitive P2P applications, Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures, June 09-11, 2007, San Diego, California, USA
Karthik Lakshminarayanan , Ananth Rao , Ion Stoica , Scott Shenker, End-host controlled multicast routing, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.6, p.807-825, 13 April 2006
Zongpeng Li , Baochun Li , Lap Chi Lau, On achieving maximum multicast throughput in undirected networks, IEEE/ACM Transactions on Networking (TON), v.14 n.SI, p.2467-2485, June 2006
Xiaolong Li , Aaron D. Striegel, A case for Passive Application Layer Multicast, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.11, p.3157-3171, August, 2007
Chun-Chao Yeh , Lin Siong Pui, On the frame forwarding in peer-to-peer multimedia streaming, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Miguel Castro , Peter Druschel , Anne-Marie Kermarrec , Animesh Nandi , Antony Rowstron , Atul Singh, SplitStream: high-bandwidth multicast in cooperative environments, Proceedings of the nineteenth ACM symposium on Operating systems principles, October 19-22, 2003, Bolton Landing, NY, USA
V. Kalogeraki , D. Zeinalipour-Yazti , D. Gunopulos , A. Delis, Distributed middleware architectures for scalable media services, Journal of Network and Computer Applications, v.30 n.1, p.209-243, January 2007
Eli Brosh , Asaf Levin , Yuval Shavitt, Approximation and heuristic algorithms for minimum-delay application-layer multicast trees, IEEE/ACM Transactions on Networking (TON), v.15 n.2, p.473-484, April 2007
Algorithms and Trade-Offs in Multicast Service Overlay Design, Simulation, v.82 n.6, p.369-381, June 2006
Beichuan Zhang , Wenjie Wang , Sugih Jamin , Daniel Massey , Lixia Zhang, Universal IP multicast delivery, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.6, p.781-806, 13 April 2006
Mohammad S. Obaidat, Guest Editorial: Recent Advances in Modeling and Simulation of Network Systems, Simulation, v.82 n.6, p.365-367, June 2006
Minseok Kwon , Sonia Fahmy, Path-aware overlay multicast, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.47 n.1, p.23-45, 14 January 2005
Jun-Hong Cui , Mario Gerla, A framework for realistic and systematic multicast performance evaluation, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.12, p.2054-2070, 24 August 2006
Zhi Li , Prasant Mohapatra, On investigating overlay service topologies, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.1, p.54-68, 17 January 2007
Mehran Dowlatshahi , Farzad Safaei, System architecture and mobility management for mobile immersive communications, Advances in Multimedia, v.2007 n.1, p.5-5, January 2007
optimized video peer-to-peer multicast streaming, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Patrick Th. Eugster , Pascal A. Felber , Rachid Guerraoui , Anne-Marie Kermarrec, The many faces of publish/subscribe, ACM Computing Surveys (CSUR), v.35 n.2, p.114-131, June
K. K. To , Jack Y. B. Lee, Parallel overlays for high data-rate multicast data transfer, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.1, p.31-42, 17 January 2007
Hao Yin , Chuang Lin , Feng Qiu , Xuening Liu , Dapeng Wu, TrustStream: a novel secure and scalable media streaming architecture, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Ananth Rao , Ion Stoica, An overlay MAC layer for 802.11 networks, Proceedings of the 3rd international conference on Mobile systems, applications, and services, June 06-08, 2005, Seattle, Washington
H. Saito , K. Taura , T. Chikayama, Collective Operations for Wide-Area Message Passing Systems Using Adaptive Spanning Trees, Proceedings of the 6th IEEE/ACM International Workshop on Grid Computing, p.40-48, November 13-14, 2005
Yongjun Li , James Z. Wang, Cost analysis and optimization for IP multicast group management, Computer Communications, v.30 n.8, p.1721-1730, June, 2007
Suman Banerjee , Christopher Kommareddy , Koushik Kar , Bobby Bhattacharjee , Samir Khuller, OMNI: an efficient overlay multicast infrastructure for real-time applications, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.6, p.826-841, 13 April 2006
E. W. Biersack , D. Carra , R. Lo Cigno , P. Rodriguez , P. Felber, Overlay architectures for file distribution: Fundamental performance analysis for homogeneous and heterogeneous cases, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.3, p.901-917, February, 2007
Chae Y. Lee , Ho Dong Kim, Reliable overlay multicast trees for private Internet broadcasting with multiple sessions, Computers and Operations Research, v.34 n.9, p.2849-2864, September, 2007
Tetsuya Kusumoto , Yohei Kunichika , Jiro Katto , Sakae Okubo, Tree-based application layer multicast using proactive route maintenance and its implementation, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Yang Guo , Kyoungwon Suh , Jim Kurose , Don Towsley, P2Cast: peer-to-peer patching scheme for VoD service, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Panayotis Fouliras , Spiros Xanthos , Nikolaos Tsantalis , Athanasios Manitsaris, LEMP: Lightweight Efficient Multicast Protocol for video on demand, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Zongming Fei , Mengkun Yang, A proactive tree recovery mechanism for resilient overlay multicast, IEEE/ACM Transactions on Networking (TON), v.15 n.1, p.173-186, February 2007
Yi Cui , Baochun Li , Klara Nahrstedt, On achieving optimized capacity utilization in application overlay networks with multiple competing sessions, Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures, June 27-30, 2004, Barcelona, Spain
Yair Amir , Claudiu Danilov , Stuart Goose , David Hedqvist , Andreas Terzis, 1-800-OVERLAYS: using overlay networks to improve VoIP quality, Proceedings of the international workshop on Network and operating systems support for digital audio and video, June 13-14, 2005, Stevenson, Washington, USA
Jian-Guang Lou , Hua Cai , Jiang Li, Interactive multiview video delivery based on IP multicast, Advances in Multimedia, v.2007 n.1, p.13-13, January 2007
Jiantao Kong , Karsten Schwan, KStreams: kernel support for efficient data streaming in proxy servers, Proceedings of the international workshop on Network and operating systems support for digital audio and video, June 13-14, 2005, Stevenson, Washington, USA
Yuval Shavitt , Tomer Tankel, Hyperbolic embedding of internet graph for distance estimation and overlay construction, IEEE/ACM Transactions on Networking (TON), v.16 n.1, p.25-36, February 2008
Praveen Rao , Justin Cappos , Varun Khare , Bongki Moon , Beichuan Zhang, Net-: unified data-centric internet services, Proceedings of the 3rd USENIX international workshop on Networking meets databases, p.1-6, April 10, 2007, Cambridge, MA
Andrea Passarella , Franca Delmastro, Usability of legacy p2p multicast in multihop ad hoc networks: an experimental study, EURASIP Journal on Wireless Communications and Networking, v.2007 n.1, p.38-38, January 2007
Zhang , T. S. Eugene Ng , Animesh Nandi , Rudolf Riedi , Peter Druschel , Guohui Wang, Measurement based analysis, modeling, and synthesis of the internet delay space, Proceedings of the 6th ACM SIGCOMM on Internet measurement, October 25-27, 2006, Rio de Janeriro, Brazil
Glen MacLarty , Michael Fry, Towards a platform for wide-area overlay network deployment and management, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.8, p.2144-2162, June, 2007 | scalability;overlay networks;hierarchy;application layer multicast;peer-to-peer systems |
633055 | On the characteristics and origins of internet flow rates. | This paper considers the distribution of the rates at which flows transmit data, and the causes of these rates. First, using packet level traces from several Internet links, and summary flow statistics from an ISP backbone, we examine Internet flow rates and the relationship between the rate and other flow characteristics such as size and duration. We find, as have others, that while the distribution of flow rates is skewed, it is not as highly skewed as the distribution of flow sizes. We also find that for large flows the size and rate are highly correlated. Second, we attempt to determine the cause of the rates at which flows transmit data by developing a tool, T-RAT, to analyze packet-level TCP dynamics. In our traces, the most frequent causes appear to be network congestion and receiver window limits. | INTRODUCTION
Researchers have investigated many aspects of Internet
traffic, including characteristics of aggregate traffic [8, 16],
the sizes of files transferred, tra#c of particular applications
[4] and routing stability [7, 17], to name a few. One area
that has received comparatively little attention is the rate
at which applications or flows transmit data in the Inter-
net. This rate can be affected by any of a number of fac-
tors, including, for example, application limits on the rate
at which data is generated, bottleneck link bandwidth, net-work
congestion, the total amount of data the application
has to transmit, whether or not the application uses congestion
control, and host buffer limitations. An Internet link
may well contain traffic aggregated from many flows limited
by different factors elsewhere in the network. While
each of these factors is well understood in isolation, we have
very little knowledge about their prevalence and effect in
the current Internet. In particular, we don't have a good
understanding of the rates typically achieved by flows, nor
are we aware of the dominant limiting factors.
A better understanding of the nature and origin of flow
rates in the Internet is important for several reasons. First,
to understand the extent to which application performance
would be improved by increased transmission rates, we must
first know what is limiting their transmission rate. Flows
limited by network congestion are in need of drastically different
attention than flows limited by host buffer sizes. Fur-
ther, many router algorithms to control per-flow bandwidth
have been proposed, and the performance and
scalability of some of these algorithms depend on the nature
of the flow rates seen at routers [9, 10, 14]. Thus, knowing
more about these rates may inform the design of such
algorithms. Finally, knowledge about the rates and their
causes may lead to better models of Internet traffic. Such
models could be useful in generating simulation workloads
and studying a variety of network problems.
In this paper we use data from packet traces and summary
flow level statistics collected on backbone routers and
access links to study the characteristics and origins of flow
rates in the Internet. Specifically, we examine the distribution
of flow rates seen on Internet links, and investigate
the relationship between flow rates and other characteristics
of flows such as their size and duration. Given these
macroscopic statistics, we then attempt to understand the
causes behind these flow rates. We have developed a tool,
called T-RAT, which analyzes traces of TCP connections
and infers which causes among several possibilities limited
the transmission rates of the flows.
Among our significant findings are the following. First,
confirming what has been observed previously, the distribution
of flow rates is skewed, but not as highly skewed
as flow sizes. Second, we find, somewhat surprisingly, that
flow rates are strongly correlated with flow sizes. This is strong
evidence that user behavior, as evidenced by the amount
of data they transfer, is not intrinsically determined, but
rather, is a function of the speed at which files can be down-
loaded. Finally, using our analysis tool on several packet
traces, we find that the dominant rate limiting factors appear
to be congestion and receiver window limits. We then
Trace       Date           Length      # Packets    Sampled   Bidirectional
Access1a    Jan. 16, 2001  2 hours     22 million   -         Yes
Access1c    Jan. 3, 2002   1 hour
Peering1    Jan. 24, 2001  45 minutes  34 million   -         No
Regional1a  Jan. 2, 2002   1 hour      1.2 million  1 in 256  No
Regional1b  Jan. 3, 2002   2 hours     2.3 million  1 in 256  No
Regional2   Jan. 3, 2002   2 hours     5 million    1 in 256  No

Table 1: Characteristics of 8 packet traces
examine the distribution of flow rates among flows in the
same causal class (i.e., flows whose rate is limited by the
same factor).
While we believe our study is the first of its kind to examine
the causes of Internet flow rates and relate these causes
to other flow characteristics, it is by no means the last word
in this area. This paper raises the question, but it leaves
many issues unaddressed. However, the value in our work
is a new tool that allows for further investigation of this
problem, and an initial look at the answers it can provide.
Also, while we address flow rates from a somewhat differ-
ent angle, our paper is not the first to study Internet flow
rates. A preliminary look at Internet flow rates in a small
number of packet traces found the distribution of rates to
be skewed, but not as highly skewed as the flow size distribution
[14]. This result was consistent with observation in
[10] that a small number of flows accounted for a significant
number of the total bytes. In recent work, Sarvotham et al
[20] found that a single high rate flow usually accounts for
the burstiness in aggregate traffic. In [2], the authors look at
the distribution of throughput across connections between
hosts and a web server and find that the rates are often consistent
with a log-normal distribution. These papers have all
made important observations. In this paper, we aim to go
beyond this previous work, looking at flow rates making up
aggregate tra#c and attempting to understand their causes.
The rest of this paper is organized as follows. In the next
section we describe the data sets and methodology used in
this study. In Section 3 we present various statistics concerning
flow rates and related measures. We then describe
our rate analyzing tool in Section 4, describe our efforts to
validate its performance in Section 5, and present results of
applying it to packet traces in Section 6. We present some
conclusions in Section 7.
2. DATASETS AND METHODOLOGY
We used data from two sources in our study. The first
set of data consisted of 8 packet traces collected over a 14
month period. The traces were collected at high speed access
links connecting two sites to the Internet; a peering link
between two Tier 1 providers; and two sites on a backbone
network. The latter 3 traces were sampled pseudo-randomly
(using a hash on the packet header fields) at a rate of 1/256.
Sampling was on a per-flow basis, so that all packets from
a sampled flow were captured. The packet monitors at the
access links saw all tra#c going between the monitored sites
and the Internet, so both directions of connections were included
in the traces. For the other traces, because of asymmetric
routing often only one direction of a connection is
visible. The finite duration of the traces (30 minutes to 2
hours) introduces a bias against the largest and most long-lived
flows. However, the effect of truncation on flow rates,
the statistic in which we are most interested, should not be
significant. The characteristics of the traces are summarized
in Table 1.
We supplemented the packet level traces with summary
flow level statistics from 19 backbone routers in a Tier 1
provider. Data was collected for 24 hours from the 19 routers
on each of 4 days between July, 2000 and November, 2001,
yielding 76 sets of data. Because this data was collected
concurrently from routers in the same backbone provider, a
single flow can be present in more than one of the datasets.
We do not know how often this occurred. However, the
routers represent a relatively small fraction of the provider's
routers, so we expect that each dataset contains a relatively
unique set of flows.
Records in these datasets contain the IP addresses of the
endpoints, port numbers, higher layer protocol, the start
time and end time for the flow, the total number of packets
and the total number of bytes. Since these datasets lack
packet level details, we cannot use them for the trace analysis
in Section 4. However, they provide a useful supplement
to our results in Section 3, greatly broadening the scope of
the data beyond the limited number of packet traces. Each
of the 4 days of summary statistics represents between 4 and
6 billion packets and between 1.5 and 2.5 terabytes of data.
Flows can be defined by either their source and destination
addresses, or by addresses, port numbers and protocol.
The appropriateness of a definition depends in part on what
one is studying. For instance, when studying router mechanisms
that do per-flow processing, the former definition may
be appropriate. When examining the characteristics of individual
transport layer connections the latter is preferred.
For the results reported in this paper, we used the 5-tuple
of IP addresses, port numbers, and protocol number. We
also generated results defining flows by source and destination
IP addresses only. Those results are not qualitatively
different. Also, for the results presented here, we used a
60 second timeout to decide that an idle flow has terminated.
Repeating the tests with a 15 second timeout again
did not qualitatively affect the results.
In the analysis that follows we report on some basic per-flow
statistics, including flow size, duration and rate. Size
is merely the aggregate number of bytes transferred in the
flow (including headers), and duration is the time elapsed
between the first and last packets of a flow. Flow rate is
also straightforward (size divided by duration) with the exception
that determining a flow rate for very short flows is
Figure 1: Complementary distribution of flow rates
Figure 2: Complementary distribution of flow sizes
Figure 3: Complementary distribution of flow duration
problematic. In particular, rate is not well-defined for single
packet flows whose duration by definition is zero. Similarly,
flows of very short (but non-zero) duration also present a
problem. It does not seem reasonable to say that a 2-packet
flow that sends these packets back-to-back has an average
rate equal to the line rate. In general, since we are most
interested in the rate at which applications transmit data,
when calculating rates we ignore flows of duration less than
100 msec, since the timing of these flows' packets may be
determined as much by queueing delays inside the network
as by actual transmission times at the source.
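For concreteness, the per-flow bookkeeping just described can be sketched as follows. This is an illustrative Python sketch rather than the scripts used for the paper; the record layout and the 60 second idle timeout constant are assumptions.

    IDLE_TIMEOUT = 60.0    # seconds of inactivity before a flow is considered terminated (assumed)
    MIN_DURATION = 0.1     # rates are not computed for flows shorter than 100 msec

    def flow_stats(packets):
        """packets: iterable of (ts, src, dst, sport, dport, proto, length) tuples,
        sorted by timestamp ts. Returns a list of (bytes, duration, rate) per flow."""
        active = {}        # 5-tuple -> [first_ts, last_ts, byte_count]
        done = []
        for ts, src, dst, sport, dport, proto, length in packets:
            key = (src, dst, sport, dport, proto)
            state = active.get(key)
            if state is not None and ts - state[1] > IDLE_TIMEOUT:
                done.append(state)                 # idle gap: close the previous flow
                state = None
            if state is None:
                active[key] = [ts, ts, length]
            else:
                state[1] = ts
                state[2] += length
        done.extend(active.values())
        stats = []
        for first_ts, last_ts, nbytes in done:
            duration = last_ts - first_ts
            # rate in bits/sec; undefined for flows shorter than 100 msec
            rate = 8.0 * nbytes / duration if duration >= MIN_DURATION else None
            stats.append((nbytes, duration, rate))
        return stats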
3. CHARACTERISTICS
In this section we examine the characteristics of Internet
flows. We begin by looking at the distributions of rate,
size and duration, before turning to the question of relationships
among them. Throughout, we start with data from the
packet traces, and then supplement this with the summary
flow data.
3.1 Rate Distribution
Figure
1 plots the complementary distribution of flow
rates, for flows lasting longer than 100 msec, in the 8 packet
traces. The distributions show that average rates vary over
several orders of magnitude. Most flows are relatively slow,
with average rates less than 10kbps. However, the fastest
flows in each trace transmit at rates above 1Mbps; in some
traces the top speed is over 10Mbps. For comparison, we
also show the complementary distributions of flow size and
duration in Figures 2 and 3, respectively. The striking difference
here is the longer tail evident in the distributions of
flow sizes for the packet traces. One possible explanation
of this difference is that file sizes are potentially unbounded
while flow rates are constrained by link bandwidths.
A previous study of rate distributions at a web server suggested
that the rate distributions were well described by a
log-normal distribution [2]. To test that hypothesis, we use
the quantile-quantile plot (Q-Q plot) [3] to compare the flow
rate distribution with analytical models. The Q-Q plot determines
whether a data set has a particular theoretical distribution
by plotting the quantiles of the data set against
the quantiles of the theoretical distribution. If the data
comes from a population with the given theoretical distri-
bution, then the resulting scatter plot will be approximately
a straight line. The Q-Q plots in Figures 4 and 5 compare
the log of the rate distribution to the normal distribution for
two of the traces (Access1c and Regional2). The fit between
the two is visually good. As in Reference [2], we further
assess the goodness-of-fit using the Shapiro-Wilk normality
test [5]. For Access1c (Figure 4), we can not reject the null
hypothesis that the log of rate comes from normal distribution
at 25% significance level; for Regional2 (Figure 5), we
can not reject normality at any level of significance. This
suggests the fit for a normal distribution is indeed very good.
Applying the Shapiro-Wilk test on all the packet traces and
flow summary data, we find that for 60% of the data sets we
can not reject normality at 5% significance level. These results
give evidence that the flow rates can often be described
with a log-normal distribution.
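A minimal sketch of such a log-normality check, using SciPy rather than whatever tools the authors used, might look like the following; the significance level and the subsampling step are illustrative assumptions.

    import numpy as np
    from scipy import stats

    def lognormal_check(rates, alpha=0.05, max_n=5000, seed=0):
        """rates: per-flow average rates (all > 0). Shapiro-Wilk is applied to log(rate);
        very large samples are randomly subsampled (an assumed choice) before testing."""
        log_rates = np.log(np.asarray(rates, dtype=float))
        if len(log_rates) > max_n:
            rng = np.random.default_rng(seed)
            log_rates = rng.choice(log_rates, size=max_n, replace=False)
        w, p_value = stats.shapiro(log_rates)      # test normality of log(rate)
        return w, p_value, p_value > alpha         # True: cannot reject log-normality

    # A normal Q-Q plot of log(rate), as in Figures 4 and 5, can be drawn with:
    #   stats.probplot(np.log(rates), dist="norm", plot=matplotlib.pyplot)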
The next question we address is how important the fast
flows are. In particular, how much of the total bytes transferred
are accounted for by the fastest flows? Note that a
skewed rate distribution need not imply that fast flows account
for a large fraction of the bytes. This will depend on
the size of fast flows. Figure 6 plots the fraction of bytes
accounted for in a given percentage of the fastest flows for
the 8 packet traces. We see that in general, the 10% fastest
flows account for between 30% and 90% of all bytes trans-
ferred, and the 20% fastest flows account for between 55% and
95%. This indicates that while most flows are not fast, these
fast flows do account for a significant fraction of all traffic.
Figure
7 shows results for the summary flow data. This
Figure 4: Q-Q plot for Access1c trace
Figure 5: Q-Q plot for Regional2 trace
Figure 6: Fraction of bytes in fastest flows
Figure 7: Distribution of the fraction of bytes in the 10% and 20% fastest flows for summary flow data
figure plots the distribution of the percentage of bytes accounted
for by the 10% and 20% fastest flows across the
76 sets of data. The skewed distributions exhibited in the
traces are evident here as well. For example, in over 80% of
the datasets, the fastest 10% of the flows account for at least
50% of the bytes transferred. Similarly, the fastest 20% of
the flows account for over 65% of the bytes in 80% of the
datasets. For comparison, the fraction of bytes in the largest
flows (not shown) is even greater.
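The quantity plotted in Figures 6 and 7, the share of bytes carried by the fastest flows, reduces to a short computation; the sketch below is illustrative only.

    import numpy as np

    def bytes_in_fastest(sizes, rates, top_fraction=0.10):
        """sizes: per-flow byte counts; rates: per-flow average rates (same order).
        Returns the fraction of all bytes carried by the top_fraction fastest flows."""
        sizes = np.asarray(sizes, dtype=float)
        rates = np.asarray(rates, dtype=float)
        order = np.argsort(rates)[::-1]            # fastest flows first
        k = max(1, int(round(top_fraction * len(sizes))))
        return sizes[order[:k]].sum() / sizes.sum()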
We now characterize flows along two dimensions: big or
small, and fast or slow. We chose 100 KByte as a cutoff on
the size dimension and 10 KByte/sec on the rate dimension.
These thresholds are arbitrary, but they provide a way to
characterize flows in a two-by-two taxonomy.
Table
2 shows the fraction of flows and bytes in each of the
4 categories for the packet traces. Flows that are small and
slow, the largest group in each trace, account for between
44% and 63% of flows. However, they account for a relatively
small fraction of the total bytes (10% or less.) There are
also a significant number of flows in the small-fast category
(between 30% and 40%) but these too represent a modest
fraction of the total bytes (less than 10% in all but one
trace.) On the other hand, there are a small number of
flows that are both big and fast (generally less than 10%).
These flows account for the bulk of the bytes transferred-at
least 60% in all of the traces, and over 80% in many of them.
The big-slow category is sparsely populated and these flows
account for less than 10% of the bytes. Data for the 76 sets
of summary flow statistics (not shown here) are generally
consistent with the packet trace results.
One question about Internet dynamics is the degree to
which traffic is dominated (in different ways) by small flows.
In terms of the number of flows, there is little doubt that the
vast majority are indeed small. More than 84% of the flows
in all of our traces (and over 90% in some of them) meet our
(arbitrary) definition of small. However, before we conclude
that the Internet is dominated by these small flows and that
future designs should be geared towards dealing with them,
we should remember that a very large share of the bytes
are in big and fast flows. In 6 of the 8 traces we examined,
these flows comprised over 80% of the bytes. Thus, when
designing mechanisms to control congestion or otherwise deal
with traffic arriving at a router, these big and fast flows are
an important (and sometimes dominant) factor.
3.2 Correlations
We next examine the relationship between the flow characteristics
of interest. Table 3 shows 3 pairs of correlations-
duration and rate, size and rate, and duration and size-for
the 8 packet traces. We computed correlations of the log of
these data because of the large range and uneven distribu-
tion. We restricted the correlations to flows with durations
longer than 5 seconds. Results for the other flow definitions
are similar.
The correlations are fairly consistent across traces, and
show a negative correlation between duration and rate, a
Table 2: Fraction of flows and bytes in Small/Slow, Small/Fast, Big/Slow and Big/Fast flows (for each trace, the fraction of flows and the fraction of bytes in each category).
Trace       logD,logR   logS,logR   logD,logS
Peering1    -0.319      0.847       0.235
Regional1a  -0.453      0.842       0.100
Regional1b  -0.432      0.835       0.136
Regional2   -0.209      0.877       0.287

Table 3: Correlations of size, rate and duration in 8 packet traces
slight positive correlation between size and duration and a
strong correlation between the size and rate. The correlation
between rate and size is also evident in other subsets of flows.
For flows longer than 1 second, the correlations range from
.65 to .77. For still longer-lived flows, the
correlations range from .90 to .95.
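The correlations reported in Table 3 can be reproduced in outline with a sketch like this one (log-transformed values, flows longer than 5 seconds, Pearson correlation); the variable names are ours.

    import numpy as np

    def log_correlations(sizes, durations, min_duration=5.0):
        sizes = np.asarray(sizes, dtype=float)
        durations = np.asarray(durations, dtype=float)
        keep = durations > min_duration            # restrict to longer-lived flows
        logS = np.log(sizes[keep])
        logD = np.log(durations[keep])
        logR = logS - logD                         # log(rate) = log(size) - log(duration)
        def corr(x, y):
            return float(np.corrcoef(x, y)[0, 1])
        return {"logD,logR": corr(logD, logR),
                "logS,logR": corr(logS, logR),
                "logD,logS": corr(logD, logS)}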
Figure
8 shows CDFs of the 3 correlations taken across
each of our datasets (packet traces and summary flow level
statistics). This figure shows that the general trend exhibited
in the packet traces was also evident in the summary
flow data we examined.
The most striking result here is the correlation between
size and rate. If users first decided how much data they
wanted to transfer (e.g., the size of a file) independent of
the network conditions, and then sent it over the network,
there would be little correlation between size and rate, 1 a
strong correlation between size and duration, and a strongly
negative correlation between rate and duration. This is not
what we see; the negative correlation between rate and duration
is fairly weak, the correlation between size and duration
is very weak, and the correlation between size and rate
is very strong. Thus, users appear to choose the size of their
transfer based, strongly, on the available bandwidth. While
some adjustment of user behavior was to be expected, we
were surprised at the extent of the correlation between size
and rate.
Slow-start could cause some correlation between rate
and size. In order to assess the impact of slow-start on the
correlations we observed, we eliminated the first 1 second of
all flows and recomputed the correlations. For flows lasting
longer than 5 seconds, the resulting correlations between size
and rate in the 8 traces ranged from .87 to .92, eliminating
slow-start as a significant cause of the correlation.
Figure 8: CDF of correlations of size, rate and duration across all datasets
4. TCP RATE ANALYSIS TOOL
In the previous section we looked at flow rates and their
relationship to other flow characteristics. We now turn our
attention to understanding the origins of the rates of flows
we observed. We restrict our attention to TCP flows for two
reasons. First, TCP is used by most traffic in the Internet
[21]. Second, the congestion and flow control mechanisms
in TCP give us the opportunity to understand and explain
the reasons behind the resulting transmission rates. In this
section we describe a tool we built, called T-RAT (for TCP
Rate Analysis Tool) that examines TCP-level dynamics in
a packet trace and attempts to determine the factor that
limits each flow's transmission rate.
T-RAT leverages the principles underlying TCP. In partic-
ular, it uses knowledge about TCP to determine the number
of packets in each flight and to make a rate limit determination
based on the dynamics of successive flights. However,
as will become evident from the discussion below, principles
alone are not sufficient to accomplish this goal. By necessity
T-RAT makes use of many heuristics which through experience
have been found to be useful.
Before describing how T-RAT works, we first review the
requirements that motivate its design. These include the
range of behavior it needs to identify as well as the environment
in which we want to use it.
The rate at which a TCP connection transmits data can
be determined by any of several factors. We characterize
the possible rate limiting factors as follows:
. Opportunity limited: the application has a limited
amount of data to send and never leaves slow-start.
This places an upper bound on how fast it can transmit
data.
. Congestion limited: the sender's congestion window is
adjusted according to TCP's congestion control algorithm
in response to detecting packet loss.
. Transport limited: the sender is doing congestion avoid-
ance, but doesn't experience any loss.
. Receiver window limited: the sending rate is limited
by the receiver's maximum advertised window.
. Sender window limited: the sending rate is constrained
by buffer space at the sender, which limits the amount
of unacknowledged data that can be outstanding at
any time.
. Bandwidth limited: the sender fully utilizes, and is
limited by, the bandwidth on the bottleneck link. The
sender may experience loss in this case. However, it is
different from congestion limited in that the sender is
not competing with any other flows on the bottleneck
link. An example would be a connection constrained
by an access modem.
. Application limited: the application does not produce
data fast enough to be limited by either the transport
layer or by network bandwidth.
We had the following requirements in designing T-RAT.
First, we do not require that an entire TCP connection,
or even its beginning, be observed. This prevents any bias
against long-lived flows in a trace of limited duration. Sec-
ond, we would like the tool to work on traces recorded at
arbitrary places in the network. Thus, the analyzer may
only see one side of a connection, and it needs to work even
if it was not captured near either the sender or receiver. Fi-
nally, to work with large traces, our tool must work in a
streaming fashion to avoid having to read the entire trace
into memory.
T-RAT works by grouping packets into flights and then
determining a rate limiting factor based on the behavior of
groups of adjacent flights. This entails three main compo-
nents: (i) estimating the Maximum Segment Size (MSS) for
the connection, (ii) estimating the round trip time, and (iii)
analyzing the limit on the rate achieved by the connection.
We now describe these components in more detail. As mentioned
above, T-RAT works with either the data stream,
acknowledgment stream, or both. In what follows, we identify
those cases when the algorithm is by necessity different
for the data and the acknowledgment streams.
4.1 MSS Estimator
The analysis requires that we have an estimate of the MSS
for a connection. When the trace contains data packets, we
set the MSS to the largest packet size observed. When the
trace contains only acknowledgments, estimating the MSS is
more subtle, since there need not be a 1-to-1 correspondence
between data and acknowledgment packets. In this case, we
estimate the MSS by looking for the most frequent common
divisor. This is similar to the greatest common divisor,
however, we apply heuristics to avoid looking for divisors of
numbers of bytes acknowledged that are not multiples of the
MSS.
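One way to realize this "most frequent common divisor" idea is sketched below. For simplicity the sketch only scores a fixed list of plausible segment sizes, which is an assumption on our part; the text above describes a more general divisor search.

    from collections import Counter

    def estimate_mss_from_acks(ack_increments,
                               candidates=(1460, 1448, 1400, 1380, 1024, 536)):
        """ack_increments: bytes newly acknowledged by each cumulative ACK (> 0 only).
        candidates: plausible MSS values to test (an assumed list)."""
        support = Counter()
        for inc in ack_increments:
            if inc <= 0:
                continue
            for mss in candidates:
                if inc % mss == 0:                 # a whole number of segments of this size
                    support[mss] += 1
        if not support:
            return None
        best = max(support.values())
        # among comparably well supported candidates, prefer the largest segment size
        return max(m for m, c in support.items() if c >= 0.9 * best)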
4.2 Round Trip Time Estimator
In this section, we present a general algorithm for estimating
RTT based on packet-level TCP traces. RTT estimation
is not our primary goal but, rather, a necessary component
of our rate analyzer. As such, we ultimately judge the algorithm
not by how accurately it estimates RTT (though
we do care about that) but by whether it is good enough to
allow the rate analyzer to make correct decisions.
There are three basic steps to the RTT estimation algo-
rithm. First, we generate a set of candidate RTTs. Then
for each candidate RTT we assess how good an estimate of
the actual RTT it is. We do this by grouping packets into
flights based on the candidate RTT and then determining
how consistent the behavior of groups of consecutive flights
is with identifiable TCP behavior. Then we choose the candidate
RTT that is most consistent with TCP. We expand
on each of these steps below.
We generate 27 candidate RTTs, spanning values up to
3 sec. This covers the range of round trip times we would
normally expect anywhere beyond the local network.
Assume we have a stream of packets, P_i, each with arrival
time T_i and an inter-arrival interval ∆P_i = T_i − T_{i−1}. For
a candidate RTT, we group packets into flights as follows.
Given the first packet, P_0, in a flight, we determine the first
packet in the next flight by examining ∆P_i for all packets
with arrival times between T_0 and T_0 + fac · RTT,
where fac is a factor to accommodate variation in the round
trip time. We identify the packet P_1 with the largest inter-arrival
time in this interval. We also examine P_2, the first
packet that arrives after T_0 + fac · RTT. If ∆P_2 ≥ 2 · ∆P_1, we
choose P_2 as the first packet of the next flight. Otherwise,
we choose P_1.
There is an obvious tradeoff in the choice of fac. We
need fac to be large enough to cover the variation of RTT.
However, setting fac too large will introduce too much noise,
thereby reducing the accuracy of the algorithm. Currently,
we set fac to 1.7, which is empirically optimal among 1.1,
1.2, ..., 2.0.
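The flight-boundary search just described might be coded roughly as follows; this is our own reading of the procedure, with fac defaulting to the value quoted above.

    def next_flight_start(times, start, rtt, fac=1.7):
        """times: sorted packet arrival times; start: index of the first packet of the
        current flight. Returns the index of the first packet of the next flight."""
        t0 = times[start]
        window_end = t0 + fac * rtt
        p1, gap1 = None, 0.0
        i = start + 1
        while i < len(times) and times[i] <= window_end:
            gap = times[i] - times[i - 1]
            if gap >= gap1:                        # P1: largest inter-arrival gap in the window
                p1, gap1 = i, gap
            i += 1
        if i < len(times):                         # P2: first packet beyond the window
            gap2 = times[i] - times[i - 1]
            if p1 is None or gap2 >= 2.0 * gap1:
                return i
        return p1 if p1 is not None else i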
Once a set of flights, F_i (i ≥ 0), has been identified for a
candidate RTT, we evaluate it by attempting to match its
behavior to that of TCP. Specifically, we see whether the
behavior of successive flights is consistent with slow-start,
congestion avoidance, or response to loss. We elaborate on
how we identify each of these three behaviors.
Testing for Packet Loss: When the trace contains data
packets, we infer packet loss by looking for retransmissions.
Let seqB be the largest sequence number seen before flight
F . We can conclude that F has packet loss recovery (and
a prior flight experienced loss) if and only if we see at least
one data packet in F with upper sequence number less than
or equal to seqB . For the acknowledgment stream, we infer
packet loss by looking for duplicate acknowledgments.
Like TCP, we report a packet loss whenever we see three
duplicate acknowledgments. In addition, if a flight has no
more than 4 acknowledgment packets, we report a packet
loss whenever we see a single duplicate. The latter helps
to detect loss when the congestion window is small, which
often leads to timeouts and significantly alters the timing
characteristics. These tests are robust to packet reordering
as long as it does not span flight boundaries or cause 3 duplicate
acknowledgments.
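In code form, the two loss tests read roughly as follows (a sketch with assumed data structures, not T-RAT itself).

    def flight_has_retransmission(flight_data_pkts, seq_before_flight):
        """flight_data_pkts: (sequence number, payload length) pairs for data packets
        in the flight; seq_before_flight: largest sequence number seen before it."""
        return any(seq + length <= seq_before_flight
                   for seq, length in flight_data_pkts)

    def flight_acks_signal_loss(ack_numbers, small_flight=4):
        """ack_numbers: cumulative ACK values seen in the flight, in order."""
        dups = sum(1 for prev, cur in zip(ack_numbers, ack_numbers[1:]) if cur == prev)
        if len(ack_numbers) <= small_flight:
            return dups >= 1                       # small flights: one duplicate suffices
        return dups >= 3                           # otherwise require three duplicate ACKs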
Testing for Congestion Avoidance: Given a flight F,
define its flight size, S_F, in terms of the number of MSS
packets it contains:

S_F = (seq_F^+ − seq_F^−) / MSS

where seq_F^− is the largest sequence number seen before F,
and seq_F^+ is the largest sequence number seen before the next
flight. We define a flight's duration D_F as the lag between
the arrival of the first packet of F and the first packet in the
subsequent flight.
Testing whether four consecutive flights F_i (1 ≤ i ≤ 4)
are consistent with congestion avoidance requires determining
whether the flight sizes, S_{F_i}, exhibit an additive increase
pattern. (As the discussion below on delayed acknowledgments
indicates, and as confirmed by experience, four consecutive
flights is the smallest number that in most cases allows us
to identify the behavior correctly.) The test is trivial when
the receiver acknowledges every packet. In this case, we only
need to test whether S_{F_{i+1}} − S_{F_i} = 1 for i = 1, 2, 3.
The test is considerably more complex with delayed ac-
knowledgments. In this case, the sizes of successive flights
need not increase by 1. Because only every other packet
is acknowledged, the sender's congestion window increases
by 1 on average every second flight. Further, because the
flight size is equal to the sender's window minus unacknowledged
packets, the size of successive flights may decrease
when the acknowledgment for the last packet in the prior flight
is delayed. Hence, sequences of flight sizes in which each size
repeats for roughly two consecutive flights before increasing
by one are common.
In our algorithm, we consider flights F_i (1 ≤ i ≤ 4) to
be consistent with congestion avoidance if and only if the
following three conditions are met:
1. −2 ≤ S_{F_i} − S_{F_i}^{predicted} ≤ 2 (2 ≤ i ≤ 4), where
S_{F_i}^{predicted} is the predicted number of segments
in flight F_i under additive increase.
2. The flight sizes are not too small and have an overall
non-decreasing pattern, which we check with three
threshold tests on the S_{F_i}.
3. The flight durations D_{F_i} are not too different from
one another.
The first condition above captures additive increase patterns
with and without delayed acknowledgments. The second
and third conditions are sanity checks.
Testing for Slow-Start: As was the case with congestion
avoidance, TCP dynamics differ substantially during
slow-start with and without delayed acknowledgments. We
apply different tests for each of the two cases and classify
the behavior as consistent with slow-start if either test is
passed.
To capture slow-start behavior without delayed acknowl-
edgments, we only need to test whether S_{F_{i+1}} = 2 · S_{F_i}
for i = 1, 2, 3.
The following test captures slow-start dynamics when delayed
acknowledgments are used. We consider flights F_i (1 ≤ i ≤ 4)
to be consistent with slow-start behavior if and
only if the following two conditions are met:
1. −3 ≤ S_{F_i} − S_{F_i}^{predicted} ≤ 3 (2 ≤ i ≤ 4), where
S_{F_i}^{predicted} = S_{F_{i−1}} + ACK_{F_{i−1}} is the predicted number
of segments in flight F_i, and ACK_{F_{i−1}} is the estimated
number of non-duplicate acknowledgment packets
in flight F_{i−1}. (For an acknowledgment stream, ACK_{F_{i−1}}
can be counted directly. For a data stream,
we estimate ACK_{F_{i−1}} as ⌈S_{F_{i−1}}/2⌉.)
2. The flight sizes are not too small and have an overall
non-decreasing pattern; in particular, we require
S_{F_i} ≥ S_{F_{i−1}}.
The first condition captures the behavior of slow-start with
and without delayed acknowledgments. The second is a
sanity check.
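Concretely, the two pattern checks can be approximated as below. The tolerances are stand-ins of our own; the exact thresholds used by T-RAT are only partially recoverable from the description above.

    def consistent_with_congestion_avoidance(sizes, delayed_ack=True, tol=2):
        """sizes: flight sizes (in MSS) of four consecutive flights."""
        s1, s2, s3, s4 = sizes
        if not delayed_ack:
            return (s2 - s1, s3 - s2, s4 - s3) == (1, 1, 1)
        # with delayed ACKs the window grows by roughly 1 every second flight
        predicted = [s1 + (k + 1) / 2.0 for k in range(3)]
        close = all(abs(s - p) <= tol for s, p in zip((s2, s3, s4), predicted))
        return close and s4 >= s1

    def consistent_with_slow_start(sizes, acks, tol=3):
        """acks: estimated non-duplicate ACKs observed in each of the four flights."""
        s1, s2, s3, s4 = sizes
        if (s2, s3, s4) == (2 * s1, 2 * s2, 2 * s3):
            return True                            # pure doubling: no delayed ACKs
        predicted = [sizes[k - 1] + acks[k - 1] for k in (1, 2, 3)]
        close = all(abs(s - p) <= tol for s, p in zip((s2, s3, s4), predicted))
        return close and s1 <= s2 <= s3 <= s4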
Analyzing TCP Dynamics: Having described how we
identify slow-start and congestion avoidance, and how we
detect loss, we now present our algorithm for assessing how
good a set of flights, F i , generated for a candidate RTT is.
Let c be the index of the current flight to be examined. Let
s be the state of the current flight: one of CA, SS or UN, for
congestion avoidance, slow-start and unknown, respectively.
Initially, s = UN. For a given flight, Fc , we determine the
state by examining Fc , Fc+1 , Fc+2 and Fc+3 and applying
the following state transitions.
. s = CA:
- If there is loss in at least one of the 4 flights, then
s transitions to UN.
- If the 4 flights show additive increase behavior as
described above then we remain in state CA.
- Similarly, we also remain in state CA even if we
don't recognize the behavior. As with TCP, we
only leave CA if there is packet loss.
. s = SS:
- If there is loss in at least one of the 4 flights, then
s transitions to UN.
- If the 4 flights are consistent with multiplicative
increase, then s remains SS.
Otherwise, s transitions to UN.
Note that we can leave state SS when there is packet
loss or there is some flight we do not understand.
. s = UN:
- If there is loss in at least one flight, s remains UN.
- If the four flights are consistent with the multiplicative
increase behavior then s transitions to
SS.
- If the four flights are consistent with additive in-
crease, s transitions to CA.
Otherwise, s remains UN.
As we analyze the set of flights, we sum up the number
of packets in flights that are either in CA or SS. We assign
this number as the score for a candidate RTT and select the
candidate with the highest score.
We have made several refinements to the algorithms described
above, which we briefly mention here but do not
describe further. First, when testing for slow-start or congestion
avoidance behavior, if an initial test of a set of flights
fails, we see whether splitting a flight into two flights or
coalescing two adjacent flights yields a set of flights that
matches the behavior in question. If so, we adjust the flight
boundaries. Second, to accommodate variations in RTT,
we continually update the RTT estimate using an exponentially
weighted moving average of the durations of successive
flights. Third, in cases where several candidate RTTs yield
similar scores, we enhance the algorithm to disambiguate
these candidates and eliminate those with very large or very
small RTTs. This also allows us to reduce the number of
candidate RTTs we examine.
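Putting these pieces together, the per-candidate scoring loop can be sketched as follows; group_flights, has_loss and the two pattern tests stand for the procedures described earlier, and the initial state is an assumption.

    def score_candidate(packets, rtt, group_flights, has_loss,
                        looks_like_slow_start, looks_like_cong_avoid):
        """Returns the number of packets in flights judged to be in SS or CA."""
        flights = group_flights(packets, rtt)      # list of per-flight packet lists
        state, score = "UN", 0
        for c in range(len(flights) - 3):
            window = flights[c:c + 4]
            if any(has_loss(f) for f in window):
                state = "UN"
            elif state == "SS":
                state = "SS" if looks_like_slow_start(window) else "UN"
            elif state == "UN":
                if looks_like_slow_start(window):
                    state = "SS"
                elif looks_like_cong_avoid(window):
                    state = "CA"
            # state == "CA": remain in CA unless loss is seen (handled above)
            if state in ("SS", "CA"):
                score += len(flights[c])           # count packets in recognized flights
        return score

    # The candidate RTT with the highest score is selected.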
4.3 Rate Limit Analysis
Using the chosen RTT, we apply our rate limit analysis
to determine the factor limiting a flow's transmission rate.
Since conditions can change over the lifetime of a flow, we
continually monitor the behavior of successive flights. We
periodically check the number of packets seen for a flow.
Every time we see 256 packets, or when no packets are seen
for a 15 second interval, we make a rate limit determination.
We now describe the specific tests we apply to determine the
rate limiting factor.
Bandwidth Limited: A flow is considered bandwidth
limited if it satisfies either of the following two tests. The
first is that it repeatedly achieves the same amount of data in
flight prior to loss. Specifically, this is the case if: (i) there
were at least 3 flights with retransmissions; and (ii) the
maximum and minimum flight sizes before the loss occurs
differ by no more than the MSS.
The second test classifies a flow as bandwidth limited if it
sustains the link bandwidth on its bottleneck link. Rather
than attempting to estimate the link bandwidth, we look
for flows in which packets are nearly equally-spaced. Specif-
ically, a flow is considered bandwidth limited if T_hi < 2 · T_lo,
where T_lo is the 5th percentile of the inter-packet times 3 and
T_hi is the P-th percentile. We set P based on the ratio of
the number of flights to the number of packets. P must be
a function of the flight size. Otherwise,
we risk classifying sender and receiver window limited
flows that have large flight sizes as bandwidth limited.
Congestion Limited: A flow is considered congestion
limited if it experienced loss and it does not satisfy the first
test for bandwidth limited.
Receiver Window Limited: We can only determine
that a flow is receiver window limited when the trace contains
acknowledgments since they indicate the receiver's advertised
window. We determine a flow to be receiver window
limited if we find 3 consecutive flights F_i with
flight sizes S_i · MSS > awnd_max − 3 · MSS, where awnd_max
is the largest receiver advertised window size. The difference
of 3 · MSS is a heuristic that accommodates variations
due to delayed acknowledgments and assumes that the MSS
need not divide the advertised window evenly.
3 In fact, because delayed acknowledgments can cause what
would otherwise be evenly spaced packets to be transmitted
in bursts of 2, we cannot use the inter-packet times
directly in this calculation. For data packets, instead of
using the inter-arrival distribution, ∆P_i, directly, we use
inter-arrival times aggregated over pairs of packets.
Sender Window Limited: Let S_F^med and S_F^80 be the
median and the 80 th percentile of the flight sizes. A flow
is considered sender window limited if the following three
conditions are met. First, the flow is not receiver window
limited, congestion limited, or bandwidth limited. Second,
S_F^80 − S_F^med ≤ 3. Finally, there are four consecutive flights
with flight sizes between S_F^med and S_F^80.
Opportunity Limited: A flow is deemed opportunity
limited if the total number of bytes transferred is less than
13 · MSS or if it never exits slow-start. The limit of 13 is
needed because it is difficult to recognize slow-start behavior
with fewer than 13 packets.
Application Limited: A flow is application limited if a
packet smaller than the MSS was transmitted followed by a
lull greater than the RTT, followed by additional data.
Transport Limited: A flow is transport limited if the
sender has entered congestion avoidance, does not experience
any loss, and the flight size continues to grow.
T-RAT is not able to identify unambiguously the rate limiting
behaviors in all cases. Therefore, the tool reports two
additional conditions.
Host Window Limited: The connection is determined
to be limited by either the sender window or the receiver
window, but the tool cannot determine which. When acknowledgments
are not present and the flow passes the sender
window limited test above, it is classified as host window
limited.
Unknown Limited: The tool is unable to match the
connection to any of the specified behaviors.
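The precedence among these tests is not spelled out exhaustively above; the sketch below shows one plausible ordering, with every predicate standing for the corresponding test and the interface invented for illustration.

    def rate_limit_cause(flow):
        """flow: an object exposing the per-flow quantities used by the tests above
        (an assumed interface, not T-RAT's)."""
        if flow.total_bytes < 13 * flow.mss or flow.never_left_slow_start:
            return "opportunity"
        if flow.is_bandwidth_limited():            # repeated loss at the same flight size,
            return "bandwidth"                     # or nearly equally spaced packets
        if flow.saw_loss:
            return "congestion"
        if flow.has_acks and flow.is_receiver_window_limited():
            return "receiver window"
        if flow.is_sender_window_limited():
            return "sender window" if flow.has_acks else "host window"
        if flow.is_application_limited():
            return "application"
        if flow.in_congestion_avoidance_without_loss:
            return "transport"
        return "unknown"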
5. VALIDATION
Before using T-RAT to analyze the rate limiting factors
for TCP flows in our packet traces, we first validated it
against measurement data as well as simulations. Specifi-
cally, we compared T-RAT's round trip time estimation to
estimates provided by tcpanaly [15] over the NPD N2 [18]
dataset. 4 Accurate RTT estimation is a fundamental component
of the tool since making a rate-limit determination
is in most cases not possible without being able to group
packets into flights. Once we validated the RTT estimation,
we then needed to determine whether the rate analyzer returned
the right answer. Validating the results against actual
network traffic is problematic. After all, that is the
problem we intend to solve with this tool. Thus, we validated
T-RAT against packet traces produced by network
simulations and by controlled network experiments in which
we could determine the specific factors that limited each
flow's transmission rate.
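Grouping packets into flights, mentioned above, can be approximated very simply; the following is a naive stand-in (splitting on gaps larger than a fraction of the estimated RTT), not T-RAT's actual flight-identification algorithm:

def group_into_flights(pkt_times, rtt, gap_fraction=0.5):
    # Illustrative stand-in only. pkt_times: non-empty sorted timestamps.
    # Start a new flight whenever the gap to the previous packet exceeds
    # gap_fraction * rtt.
    flights, current = [], [pkt_times[0]]
    for prev, nxt in zip(pkt_times, pkt_times[1:]):
        if nxt - prev > gap_fraction * rtt:
            flights.append(current)
            current = []
        current.append(nxt)
    flights.append(current)
    return flights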
5.1 RTT validation
The NPD N2 dataset contains packet traces for over 18,000 connections. We used 17,248 of these in which packets
were captured at both ends of the connections, so the dataset
contains data and acknowledgment packets recorded at both
the sender and receiver. We ran tcpanaly over this data and recorded for each connection the median of the RTT estimates it produced. We used these medians to compare to the performance of the RTT estimation of T-RAT.
4 tcpanaly requires traces of both directions of a connection. Therefore, we can use it to validate our tool using 2-way traces, but it cannot address the RTT estimation problem when only a single direction of the connection is available.
Figure 9: RTT validation against NPD N2 data. The figure plots the cumulative fraction of connections whose RTT estimate is accurate within a factor of X, for data-based and ack-based estimation at the sender and at the receiver, and for SYN-ACK based estimation.
Even though the NPD data includes both directions of
connections, we tested our RTT estimation using only a
single direction at a time (since the algorithm is designed
to work in such cases.) Hence, we consider separately the
cases in which the tool sees the data packets at the sender,
acknowledgment packets at the sender, data packets at the
receiver and acknowledgment packets at the receiver. For
each RTT estimate computed by T-RAT we measure its accuracy
by comparing it to the value produced by tcpanaly.
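One natural reading of this accuracy measure is the symmetric ratio between the two estimates; a sketch (our own names, not the tool's) of how the "accurate within a factor of X" curves below can be computed:

import numpy as np

def accuracy_factors(trat_rtts, tcpanaly_rtts):
    # Illustrative sketch only: per-connection accuracy as a factor >= 1.
    est = np.asarray(trat_rtts, dtype=float)
    ref = np.asarray(tcpanaly_rtts, dtype=float)
    return np.maximum(est / ref, ref / est)

def fraction_within(factors, x):
    # Fraction of connections accurate within a factor of x (one CDF point).
    return float(np.mean(np.asarray(factors) <= x))

For example, fraction_within(factors, 1.3) gives the height of one of the curves in Figure 9 at X = 1.3.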
The results of the RTT validation are shown in Figure 9,
which plots the CDF of the ratio between the two values for
each of the 4 cases. The figure shows that with access to the
data packets at either the sender or the receiver, for over
90% of the traces the estimated RTT is accurate within a
factor of 1.15, and for over 95% of the traces the estimated
RTT is accurate within a factor of 1.3. Accuracy of RTT
estimates based on the acknowledgment stream, while still
encouraging, is not as good as data stream analysis. In
particular, with ack-based analysis at the receiver, 90% of
estimates are accurate within a factor of 1.3 and 95% of
traces are accurate within a factor of 1.6. Using the sender-side
acknowledgment stream, estimates are accurate within
a factor of 1.6 about 90% of the time. We suspect that delayed
acknowledgments may be in part responsible for the poorer estimation accuracy using the acknowledgment stream.
By reducing the number of packets observable per RTT and
perturbing the timing of some packets, they may make the
job of RTT estimation more difficult. Further, we speculate
that the sender side performance with acknowledgments also
suffers because the acknowledgments at the sender have traversed an extra queue and are therefore subject to additional
variation in the network delays they experience.
Previous studies have used the round trip time for the
initial TCP SYN-ACK handshake as an estimate of per-connection
round trip time [6, 11]. We also compared this
value to the median value produced by tcpanaly. As shown in Figure 9, this estimate is significantly worse than the others.
In general, the SYN-ACK handshake tends to underestimate
the actual round trip time.
The overall results produced by our tool are encourag-
ing. They show that RTT estimation works reasonably well
in most cases. The real question, however, is how the rate
analyzer works. Are the errors in RTT estimation small
enough to allow the tool to properly determine a rate limiting
factor, or do the errors prevent accurate analysis? We
now turn to the question of the validity of the rate limiting
factors.
5.2 Rate Limit Validation
We validated the rate limit results of T-RAT using both
simulations and experiments in a controlled testbed. In our
simulations, we used the ns simulator [13]. By controlling
the simulated network and endpoint parameters we created
TCP connections that exhibited various rate limiting behav-
iors. For example, congestion limited behavior was simulated
using several infinite source FTP connections traversing a
shared bottleneck link, and application limited behavior was
simulated using long-lived Telnet sessions. Our simulations
included approximately 400 connections and 340,000 pack-
ets. T-RAT correctly identified the proper rate limiting behavior
for over 99% of the connections.
While these simulations provided positive results about
the performance of T-RAT, they suffered from several weaknesses. First, we were not able to validate all of the rate
limiting behaviors that T-RAT was designed to identify. In
particular, the TCP implementation in ns does not include
the advertised window in TCP packets, preventing experiments
that exhibited receiver window limited behavior. Sec-
ond, the simulations varied some relevant parameters, but
they did not explore the parameter space in a systematic
way. This left us with little knowledge about the limits of
the tool. Finally, simulations abstract away many details of
actual operating system and protocol performance, leaving
questions about how the tool would perform on real systems.
To further validate the performance of T-RAT we conducted
experiments in a testbed consisting of PCs running
the FreeBSD 4.3 operating system. In these experiments,
two PCs acting as routers were connected by a bottleneck
link. Each of these routers was also connected to a high
speed LAN. Hosts on these LANs sent and received traffic across the bottleneck link. We used the dummynet [19] facility in the FreeBSD kernel to emulate different bandwidths, propagation delays and buffer sizes on the bottleneck link.
We devised a series of experiments intended to elicit various
rate limiting behaviors, captured packet traces from the
TCP connections in these experiments using tcpdump, analyzed
these traces using T-RAT, and validated the results
reported by T-RAT against the expected behavior. Unless
otherwise noted, the bandwidth, propagation delay, and
buffer size on the emulated link were 1.5 Mbps, 25 msec, and
KBytes, respectively. We used an MTU of 540 bytes on
all interfaces, allowing us to explore a wider range of window
sizes (in terms of packets) than would be afforded with
a larger MTU.
For some of the rate limiting behaviors, we captured TCP
connections on both unloaded and loaded links. In order to
produce background load, we generated bursts of UDP traffic
at exponentially distributed intervals. The burst size was
varied from 1 to 4 packets across experiments, and the average
inter-burst interval was set so as to generate 10%, 20%,
30% and 40% load on the link. This was not intended to
model realistic traffic. Rather, the intention was to perturb the timing of the TCP packets and assess the effect of this
perturbation on the ability of T-RAT to identify correctly
the rate limiting behavior in question.
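For a rough sense of how such background load is parameterized (the paper does not give this formula; it follows from the burst size and link rate), the mean inter-burst interval for a target load is:

def mean_interburst_interval(target_load, burst_pkts, pkt_bytes, link_bps):
    # Illustrative arithmetic only. target_load: e.g. 0.2 for 20% load.
    bits_per_burst = burst_pkts * pkt_bytes * 8
    return bits_per_burst / (target_load * link_bps)

# e.g. 2-packet bursts of 540-byte packets at 20% of a 1.5 Mbps link:
# mean_interburst_interval(0.2, 2, 540, 1.5e6) -> about 0.029 seconds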
Experiments were repeated with and without delayed ac-
knowledgments. All TCP packets were captured at both
endpoints of the connection. We tested T-RAT using only
a single direction of a connection at a time (either data or
acknowledgment) to emulate the more challenging scenario
of only observing one direction of a connection. Thus, for
each connection we made four independent assessments using
data packets at the source, data packets at the destina-
tion, acknowledgment packets at the source, and acknowledgment
packets at the destination.
For each behavior we varied parameters in order to assess
how well T-RAT works under a range of conditions. Our
exploration of the relevant parameter space is by no means
exhaustive, but the extensive experiments we conducted give
us confidence about the operation of the tool.
In the vast majority of cases T-RAT correctly identified
the dominant rate limiting factor. That is, for a given con-
nection, the majority of periodic determinations made by
T-RAT were correct. Further, for many connections, all of
the periodic determinations were correct. In what follows,
we summarize the experiments and their results, focusing
on those cases that were most interesting or problematic. 5
Receiver Window Limited: In these experiments, the
maximum advertised receiver window was varied (by adjusting
the receiver's socket bu#er) for each connection, while
the sender's window was larger than the bandwidth delay
product of the link (and hence did not impact the sender's
window.) The parameters of the bottleneck link were such
that a window size of packets saturated the link. We
tested window sizes between 2 and 20 packets with no background
load. Even when the link was saturated, there was
sufficient buffering to prevent packet loss. With background
load, we only tested window sizes up to 10 packets to avoid
loss due to congestion. A 5 MByte file was transferred for
each connection.
T-RAT successfully identified these connections as receiver
window limited (using the acknowledgement stream) and
host window limited (using the data stream) in most cases.
Using the data stream, it did not correctly identify window
sizes of 2 packets as receiver window limited. It is not possible
to disambiguate this case from a bandwidth limited
connection captured upstream of the bottleneck link when
delayed acknowledgments are present. In both cases, the
trace shows periodic transmission of a burst of 2 packets
followed by an idle period. We would not expect receiver
window limits to result in flight sizes of 2 packets, so we are
not concerned about this failure mode.
T-RAT was able to identify a wide range of window sizes
as receiver window limited (or host window limited using
data packets.) As the number of packets in flight approaches
the saturation point of the link, and as a consequence the
time between successive flights approaches the inter-packet
time, identifying flight boundaries becomes more difficult.
When the tool had access to the data stream, it correctly
identified the window limit until the link utilization approached
80%-90% of the link bandwidth. Beyond that it
identified the connection as bandwidth limited. With access
to the acknowledgment stream, the tool correctly identified
the behavior as receiver window limited until the link was
fully saturated.
As we applied background traffic to the link, the dominant cause identified for each connection was still receiver window limited for acknowledgement and host window limited for data packets.
5 More detailed information about the results is available at http://www.research.att.com/projects/T-RAT/.
However, for each connection T-RAT
sometimes identified a minority of the periodic determinations
as transport limited when it had access to the data
packets. With access to the acknowledgment packets, virtually
all of the periodic determinations were receiver window
limited. Thus, the advertised window information available
in acknowledgments made T-RAT's job easier.
Sender Window Limited: These experiments were identical
to the previous ones with the exception that in this case
it was the sender's maximum window that was adjusted
while the receiver window was larger than the bandwidth
delay product of the bottleneck link.
The results were very similar to those in the receiver
window limited experiments. The tool was again unable to
identify flight sizes of 2 packets as sender window limited
(which in practice should not be a common occurrence.)
T-RAT was able to identify window sizes as large as 80-90%
of the link bandwidth as sender window limited. Beyond
that it had trouble differentiating the behavior from bandwidth limited. Finally, as background load was applied to
the link, the tool still correctly identified the most common
rate limiting factor for each connection, though it sometimes
confused the behavior with transport limited.
Transport Limited: To test transport limited behavior,
in which the connection does congestion avoidance while not
experiencing loss, we set the bottleneck link bandwidth to
10 Mbps and the one-way propagation delay to 40 msec,
allowing a window size of more than 180 packets (recall
we used a 540 byte MTU). In addition, we set the initial
value of ssthresh to 2000 bytes, so that connections transitioned
from slow-start to congestion avoidance very quickly.
With no background tra#c, each connection transferred a 4
MByte file. Without delayed acknowledgments, the window
size reached about 140 packets (utilizing 75% of the link)
before the connection terminated. When we tested this behavior
in the presence of background load, each connection
transferred a 2.5 MByte file and achieved a maximum window
size of approximately 100 packets (without delayed ac-
knowledgments). The smaller file size was chosen to prevent
packet loss during the experiments. The experiments were
repeated 10 times for each set of parameters.
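The window sizes quoted here follow from the bandwidth-delay product; a small worked check (our own arithmetic, not from the paper):

def bdp_packets(link_bps, rtt_s, mtu_bytes):
    # Bandwidth-delay product expressed in MTU-sized packets.
    return link_bps * rtt_s / (mtu_bytes * 8)

# 10 Mbps link, 80 msec round trip (2 x 40 msec), 540-byte MTU:
# bdp_packets(10e6, 0.08, 540) -> about 185 packets, i.e. "more than 180";
# a 140-packet window is then roughly 140/185, or about 75% of the link.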
T-RAT successfully identified transport limited as the dominant
rate limiting cause for each connection. It made errors
in some of the periodic determinations, with the errors becoming
more prevalent as the burst size of the background
traffic increased. Whenever T-RAT was unable to determine
the correct rate limiting behavior, its estimate of the
RTT was incorrect. However, correct RTT estimation is
not always necessary. In some cases, the tool was robust
enough to overcome errors in the RTT estimation and still
determine the proper rate limiting behavior. In assessing
transport limited behavior, T-RAT was more successful using
data packets than acknowledgment packets, particularly
when delayed acknowledgments were used. In contrast to
the receiver window limited case above, the acknowledgment
packets provide no additional information, and by acknowledging
only half of the packets, T-RAT has less information
with which to work.
Bandwidth Limited: In these experiments, a 10 MByte
file was transferred across the bottleneck link with no competing
traffic. The router buffer was large enough to avoid
packet loss, and the sender and receiver windows were large
enough to allow connections to saturate the link. We tested
bottleneck link bandwidths of 500 Kbps, 1.5 Mbps, and 10
Mbps, with and without delayed acknowledgments. Each
experiment was repeated 10 times.
In the vast majority of cases, T-RAT properly identified
the rate limiting behavior. There are two points to make
about these results. First, the RTT estimation produced
by the tool was often incorrect. For a connection that fully
saturates a bottleneck link, and is competing with no other
traffic on that link, the resulting packet trace consists of a stream of evenly spaced packets. There is, therefore, little
or no timing information with which to accurately estimate
RTT. Nonetheless, the test for bandwidth limiting behavior
depends primarily on the distribution of inter-packet
times and not on proper estimation of the flight size, so the
tool still functions properly. The second observation about
these experiments is that the connections were not exclusively
bandwidth limited. Rather, they started in congestion
avoidance (ssthresh was again set to 2000 bytes) and opened
the congestion window, eventually saturating the link. The
tool identified the connections as initially transport limited,
and then as bandwidth limited once the bottleneck link was
saturated. Visual inspection of the traces revealed that the
tool made the transition at the appropriate time. In a few
instances, the tool was unable to make a rate limiting determination
during the single interval in which the connection
transitioned states, and deemed the rate limiting behavior
to be unknown.
Congestion Limited: Congestion limited behavior was
tested by transferring 5 MByte files across the bottleneck
link with random packet loss induced by dummynet. Tests
were repeated with both 2% and 5% loss on the link in a
single direction and in both directions. As with our other
experiments, we repeated tests with and without delayed
acknowledgments, and we repeated 5 transfers in each con-
figuration. 6
In nearly all cases, T-RAT identified these connections
as congestion limited across all loss rates, acknowledgment
strategies, and directionality of loss. For a very small number
of the periodic assessments, connections were deemed
transport limited. However, a connection that does not experience
any loss over some interval will be in congestion
avoidance mode and will be appropriately deemed transport
limited. Visual inspection of a sample of these instances
showed that this was indeed the case.
Opportunity Limited: In these experiments, we varied
the amount of data transferred by each connection from 1
to 100 packets. The connection sizes and link parameters
were such that the sources never left slow-start. However,
at the larger connection sizes, the congestion window was
large enough to saturate the link. Hence, while the source
remained in slow-start, this was not always obvious when
examining packet traces.
6 We also performed the more obvious experiment in which multiple TCP connections were started simultaneously with loss induced by the competing TCPs. However, an apparent bug in the version of TCP we used sometimes prevented a connection from ever opening its congestion window after experiencing packet loss. Validating these results was more difficult since the TCP connections experienced a range of rate limiting factors (congestion, host window, transport.) Nonetheless, visual inspection of those results also indicated that the tool was properly identifying cases of congestion.
We first review the results without delayed acknowledgments. Using the trace of data packets at the source, T-RAT
correctly identified all of the connections as opportunity lim-
ited. In the other 3 traces, T-RAT identified between 83 and
88 of the connections as opportunity limited. Most of the
failures occurred at connection sizes greater than 80 pack-
ets, with a few occurring between 40 and 80 packets. None
occurred for connection sizes less than 40 packets. When it
failed, T-RAT deemed the connections either transport or
bandwidth limited. These cases are not particularly trou-
bling, as the window sizes are larger than we would expect
to see with regularity in actual traces. With delayed ac-
knowledgments, T-RAT reached the right conclusion in 399
out of 400 cases, failing only for a single connection size with
acknowledgments at the receiver.
Application Limited: Characterizing and identifying
application limited traffic is perhaps more challenging than the other behaviors we study. The test T-RAT uses for application limited traffic is based on heuristics about packet
sizes and inter-packet gaps. However, there are certainly
scenarios that will cause the tool to fail. For example, an
application that sends constant bit rate tra#c in MSS-sized
packets will likely be identified as bandwidth limited. Fur-
ther, since this traffic is by definition limited by the application, our tool needs to recognize a potentially wider range of
behaviors than with the other limiting factors. Understanding
the range of application limited traffic in the Internet
remains a subject for future study.
In our effort to validate the current tests for application limited traffic in T-RAT, we had the application generate application data units (ADUs) at intervals separated by random idle times chosen from an exponential distribution. We
tested connections with average idle times of 1, 2, 3, 10, 20,
30, 50 and 100 msec. Furthermore, rather than generating
MSS-sized ADUs as in our other experiments, we chose the
size of the ADUs from a uniform distribution between 333
and 500 bytes, the latter being the MSS in our experiments.
The resulting application layer data generation rates would
have been between 3.3 Mbps (1 msec average idle time) and
33 kbps (100 msec idle time) without any network limits. In
our case (1.5 Mbps bottleneck bandwidth) the highest rates
would certainly run into network limits. Since we did not
use MSS-sized packets, the resulting network layer traffic
depended on whether or not the TCP Nagle algorithm [12],
which coalesces smaller ADUs into MSS-sized packets, is
employed. Hence, in addition to repeating experiments with
and without delayed acknowledgments, we also repeated the
experiments with and without the Nagle algorithm turned
on.
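The application-layer rates quoted above follow directly from the ADU size and idle-time parameters; a small check (our own arithmetic):

def offered_rate_bps(avg_adu_bytes, avg_idle_s):
    # Average application data rate when one ADU is generated per idle period.
    return avg_adu_bytes * 8 / avg_idle_s

# ADUs uniform on [333, 500] bytes have a mean of about 416 bytes, so:
# offered_rate_bps(416.5, 0.001) -> about 3.3 Mbps (1 msec average idle time)
# offered_rate_bps(416.5, 0.100) -> about 33 kbps (100 msec average idle time)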
Assessing the results of these experiments was difficult.
Given that we used a stochastic data generation process,
and that one cannot know a priori how this random process
will interact with the transport layer, we could not know
what the resulting network tra#c would look like. Without
a detailed packet-by-packet examination, the best we can do
is to make qualitative characterizations about the results.
With the Nagle algorithm turned on, T-RAT characterized
the two fastest data generation rates (3.3 Mbps and
1.65 Mbps) as a combination of congestion and bandwidth
limited. This is what one would expect given the bottleneck
link bandwidth. At the lowest data rates (33 Kbps and 66
Kbps) T-RAT deemed the traffic to be application limited.
This is again consistent with intuition. In between (from 110 Kbps to 1.1 Mbps), the traffic was characterized variously as transport, host window, application, or unknown limited.
Figure 10: Fraction of bytes for each rate limiting factor (traces Access1a, Access1b, Access1c, Access2, Peering1, Regional1a, Regional1b, and Regional2; categories Congestion, Host/Sndr/Rcvr, Opportunity, Application, Transport, and Unknown).
Figure 11: Fraction of flows for each rate limiting factor.
With the Nagle algorithm disabled, the fastest generation
rates were again characterized as congestion and bandwidth
limited. At all the lower rates (1.1 Mbps down to 33 Kbps),
T-RAT deemed the connections as exclusively application
limited when using the data stream, and a combination of
application and transport limited when using the acknowledgment
stream. Thus, application limited behavior is easier
to discern when the Nagle algorithm is turned off.
6. RESULTS
The results of applying T-RAT to the 8 packet traces are
shown in Figure 10. For each trace, the plot shows the
percentage of bytes limited by each factor. The 4 traces
taken from access links are able to differentiate between sender and receiver limited flows since they see data and acknowledgment packets for all connections. The peering and regional traces, on the other hand, often only see one direction of a connection and are therefore not always able to differentiate between these two causes. We have aggregated
the 3 categories identified by T-RAT-sender, receiver
and host window limited-into a single category labeled
"Host/Sndr/Rcvr" limited in the graph.
As shown in Figure 10, the most common rate limiting
factor is congestion. It accounts for between 22% and 43%
of the bytes in each trace, and is either the first or second
most frequent cause in each trace. The aggregate category
that includes sender, receiver and host window limited is
the second most common cause of rate limits accounting
for between 8% and 48% of the bytes across traces. When
we were able to make a distinction between sender and receiver
window limited flows (i.e., when the trace captured
the acknowledgment stream), receiver window limited was
a much more prevalent cause than sender window limited,
by ratios between 2:1 and 10:1.
Figure 12: Rate distribution by rate limiting factor, Access1b trace (CDF of flow rates in bytes/sec for opportunity, application, congestion, transport, and receiver limited flows).
Figure 13: Size distribution by rate limiting factor, Access1b trace (CDF of flow sizes in bytes).
Figure 14: Duration distribution by rate limiting factor, Access1b trace (CDF of flow durations in seconds).
Other causes-opportunity limited, application limited and transport limited-usually
accounted for less than 20% of the bytes. Bandwidth limited
flows accounted for less than 2% of the bytes in all
traces (and are not shown in the plot). For most traces,
the unknown category accounted for a very small percentage
of the bytes. We examined the 2 traces in which the
rate limiting cause for more than 5% of the bytes was unknown
and identified 3 factors that prevented the tool from
making a rate limiting determination. First, T-RAT cannot
accurately estimate round trips on the order of 3 msec
or less, and therefore, cannot determine a rate limiting factor
for these connections. Second, when the traces were
missing packets in the middle of a connection (which may
have resulted either from loss at the packet filter or from
multi-path routing) estimating the round trip time and the
rate limiting cause becomes di#cult. Finally, multiple web
transfers across a persistent TCP connection also presented
problems. When one HTTP transfer uses the congestion
window remaining from the end of the previous transfer a
moderate size file may not be opportunity limited (because
it is larger than 13 packets and it never enters slow-start)
and it may not have enough flights (because the initial flight
size is large) for T-RAT to make a rate limit determination.
Not surprisingly, when we look at rate limiting factors by
flows rather than bytes, the results are very di#erent. Recall
that we continuously update the reason a flow is limited, and
a single flow may have multiple limiting factors throughout
its lifetime. For example, it may be congestion limited for
one interval, and after congestion dissipates, become window
limited. In those cases when a flow experienced multiple
causes, we classified it by the factor that most often
limited its transmission rate. Figure 11 shows the percentage
of flows constrained by each rate limiting factor for the 8
traces. The most common per-flow factors are opportunity
and application limited. Collectively, they account for over
90% of the flows in each of the 8 traces, with opportunity
limited accounting for more than 60% and application limited
accounting for between 11% and 34%. No other cause
accounted for more than 4% of the flows in any trace. These
results are consistent with the results reported in Section 3.
Namely, most flows are small and slow. Small flows are
likely opportunity limited (they don't have enough packets
to test buffer or network limits), and slow flows are likely application limited (not sending fast enough to test buffer
or network limits.)
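The per-flow aggregation described above (labeling each flow by the cause that most often limited it) can be sketched as follows (the data layout is our assumption):

from collections import Counter

def dominant_cause(interval_causes):
    # Illustrative sketch only. interval_causes: the per-interval rate limiting
    # determinations for one flow, e.g. ['congestion', 'congestion', 'transport'].
    if not interval_causes:
        return 'unknown'
    return Counter(interval_causes).most_common(1)[0][0]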
A general trend is evident when comparing the traces
taken at access links to those taken at the peering and regional
links. The former tend to have a higher percentage
of bytes that are window limited. The access links are high
speed links connecting a site to the Internet. As such, they
support a population with good connectivity to the Internet.
The other links are likely seeing a more general cross section
of Internet users, some of whom are well-connected and others
of whom are not. Since window limits are reached when
the bandwidth delay product exceeds buffer resources, the
well-connected users are more likely to reach these limits.
This difference between the two kinds of traces was evident in Figure 1. That graph shows that the distribution of rates
has a longer tail for the access links than for the regional
and peering links.
We next ask whether these different rate limiting factors
can be associated with different performance for users. Figure 12 plots the CDF of the rates for each of the rate limiting
factors for the Access1b trace. The graph shows very distinct
differences between subgroups. Overall, receiver limited
and transport limited flows have the largest average
rates, followed by congestion limited, application limited
and opportunity limited. This same trend was exhibited
across the other 7 traces. Figures 13 and 14 plot the distributions
of size and duration for each rate limiting factor in
the Access1b trace. Receiver limited flows have the largest
size distribution, followed by transport and congestion lim-
ited. In the duration distribution, congestion limited flows
have the longest duration, which is consistent with the observation
that flows experiencing congestion will take longer
to transmit their data than flows not experiencing congestion.
7. CONCLUSION
The rates at which flows transmit data is an important
and not well-understood phenomenon. The rate of a flow
can have a major impact on user experience, and the rates
of flows traversing the Internet can have a significant effect
on network control algorithms. We had two goals in this
paper. First, we wanted to better understand the characteristics
of flow rates in the Internet. Using packet traces
and summary flow statistics we examined the rates of flows
and the relationship between flow rates and other flow char-
acteristics. We found that fast flows are responsible for most
of the bytes transmitted in the Internet, so understanding
their behavior is important. We also found a strong correlation
between flow rate and size, suggesting an interaction
between bandwidth available to a user and what the user
does with that bandwidth.
Our second goal was to provide an explanation of the reasons
why flows transmit at the rates they do. This was
an ambitious goal, and we have by no means completely
answered this question. We have seen, for a set of Internet
packet traces, the reasons that flows are limited in their
transmission rates and have looked at differences among different categories of flows. We believe our main contribution,
however, is to open up an area of investigation that can lead
to valuable future research. The tool we developed to study
rate limiting behavior provides a level of analysis of TCP
connections that can answer previously unanswerable ques-
tions. Thus, our tool has applicability beyond the set of
results we have obtained with it thus far.
Acknowledgments
We would like to thank Rui Zhang for her work with us on
flow characterization that helped launch this project. We
would also like to express our thanks to the anonymous reviewers
for many helpful comments on this paper.
8. REFERENCES
--R
"A Web Server's View of the Transport Layer,"
"Analyzing Stability in Wide-Area Network Performance,"
"Graphical Methods for Data Analysis,"
"Self-similarity in World Wide Web Tra#c: Evidence and Possible Causes,"
"Goodness-of-Fit Techniques,"
"Passive Estimation of TCP Round-Trip Times,"
"Internet Routing Instability,"
"On the Self-Similar Nature of Ethernet Tra#c (Extended Version),"
"Dynamics of Random Early Detection,"
"Controlling High-Bandwidth Flows at the Congested Router,"
"Analysis of Internet Delay Times,"
"Congestion Control in IP/TCP Internetworks"
"Approximate Fairness through Di#erential Dropping,"
"Automated Packet Trace Analysis of TCP Implementations,"
"Wide-Area Tra#c: The Failure of Poisson Modeling,"
"End-to-End Routing Behavior in the Internet,"
"End-to-End Internet Packet Dynamics,"
"Dummynet: A Simple Approach to the Evaluation of Network Protocols"
"Connection-level Analysis and Modeling of Network Tra#c,"
"Wide Area Internet Tra#c Patterns and Characteristics,"
--TR
area traffic
Dummynet
Automated packet trace analysis of TCP implementations
End-to-end routing behavior in the Internet
Self-similarity in World Wide Web traffic
Internet routing instability
End-to-end internet packet dynamics
Connection-level analysis and modeling of network traffic
A web server's view of the transport layer
Passive estimation of TCP round-trip times
Controlling High-Bandwidth Flows at the Congested Router
--CTR
Jörg Wallerich , Holger Dreger , Anja Feldmann , Balachander Krishnamurthy , Walter Willinger, A methodology for studying persistency aspects of internet flows, ACM SIGCOMM Computer Communication Review, v.35 n.2, April 2005
David Watson , G. Robert Malan , Farnam Jahanian, An extensible probe architecture for network protocol performance measurement, Software - Practice & Experience, v.34 n.1, p.47-67, January 2004
Amogh Dhamdhere , Constantine Dovrolis, Open issues in router buffer sizing, ACM SIGCOMM Computer Communication Review, v.36 n.1, January 2006
Ming-zhong , Gong Jian , Ding Wei, Study of Dynamic Timeout Strategy based on flow rate metrics in high-speed networks, Proceedings of the 1st international conference on Scalable information systems, p.5-es, May 30-June 01, 2006, Hong Kong
Shriram Sarvotham , Rudolf Riedi , Richard Baraniuk, Network and user driven alpha-beta on-off source model for network traffic, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.48 n.3, p.335-350, 21 June 2005
James Hall , Andrew Moore , Ian Pratt , Ian Leslie, Multi-protocol visualization: a tool demonstration, Proceedings of the ACM SIGCOMM workshop on Models, methods and tools for reproducible network research, August 25-27, 2003, Karlsruhe, Germany
Matthew Roughan, Fundamental bounds on the accuracy of network performance measurements, ACM SIGMETRICS Performance Evaluation Review, v.33 n.1, June 2005
A. Kortebi , L. Muscariello , S. Oueslati , J. Roberts, Evaluating the number of active flows in a scheduler realizing fair statistical bandwidth sharing, ACM SIGMETRICS Performance Evaluation Review, v.33 n.1, June 2005
Tatsuya Mori , Masato Uchida , Ryoichi Kawahara , Jianping Pan , Shigeki Goto, Identifying elephant flows through periodically sampled packets, Proceedings of the 4th ACM SIGCOMM conference on Internet measurement, October 25-27, 2004, Taormina, Sicily, Italy
Lukas Kencl , Christian Schwarzer, Traffic-Adaptive Packet Filtering of Denial of Service Attacks, Proceedings of the 2006 International Symposium on on World of Wireless, Mobile and Multimedia Networks, p.485-489, June 26-29, 2006
M. Siekkinen , G. Urvoy-Keller , E. W. Biersack , T. En-Najjary, Root cause analysis for long-lived TCP connections, Proceedings of the 2005 ACM conference on Emerging network experiment and technology, October 24-27, 2005, Toulouse, France
Guohan Lu , Xing Li, On the correspondency between TCP acknowledgment packet and data packet, Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement, October 27-29, 2003, Miami Beach, FL, USA
Arifler , Gustavo de Veciana , Brian L. Evans, A factor analytic approach to inferring congestion sharing based on flow level measurements, IEEE/ACM Transactions on Networking (TON), v.15 n.1, p.67-79, February 2007
Abdesselem Kortebi , Luca Muscariello , Sara Oueslati , James Roberts, Minimizing the overhead in implementing flow-aware networking, Proceedings of the 2005 symposium on Architecture for networking and communications systems, October 26-28, 2005, Princeton, NJ, USA
Salvatore Gaglio , Luca Gatani , Giuseppe Re , Alfonso Urso, A Logical Architecture for Active Network Management, Journal of Network and Systems Management, v.14 n.1, p.127-146, March 2006
Allen B. Downey, TCP self-clocking and bandwidth sharing, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.13, p.3844-3863, September, 2007
DongJin Lee , Nevil Brownlee, Passive measurement of one-way and two-way flow lifetimes, ACM SIGCOMM Computer Communication Review, v.37 n.3, July 2007
Daniel Zaragoza , Carlos Belo, Experimental validation of the ON-OFF packet-level model for IP traffic, Computer Communications, v.30 n.5, p.975-989, March, 2007
Mahajan , Maya Rodrig , David Wetherall , John Zahorjan, Analyzing the MAC-level behavior of wireless networks in the wild, ACM SIGCOMM Computer Communication Review, v.36 n.4, October 2006
Mariyam Mirza , Joel Sommers , Paul Barford , Xiaojin Zhu, A machine learning approach to TCP throughput prediction, ACM SIGMETRICS Performance Evaluation Review, v.35 n.1, June 2007
Stergios V. Anastasiadis , Rajiv G. Wickremesinghe , Jeffrey S. Chase, Circus: Opportunistic Block Reordering for Scalable Content Servers, Proceedings of the 3rd USENIX Conference on File and Storage Technologies, March 31-31, 2004, San Francisco, CA
Rob Sherwood , Neil Spring, Touring the internet in a TCP sidecar, Proceedings of the 6th ACM SIGCOMM on Internet measurement, October 25-27, 2006, Rio de Janeriro, Brazil
Kun-chan Lan , John Heidemann, A measurement study of correlations of internet flow characteristics, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.1, p.46-62, January 2006
Srikanth Kandula , Dina Katabi , Shantanu Sinha , Arthur Berger, Dynamic load balancing without packet reordering, ACM SIGCOMM Computer Communication Review, v.37 n.2, April 2007
Aditya Akella , Srinivasan Seshan , Anees Shaikh, An empirical evaluation of wide-area internet bottlenecks, Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement, October 27-29, 2003, Miami Beach, FL, USA
Jayanthkumar Kannan , Jaeyeon Jung , Vern Paxson , Can Emre Koksal, Semi-automated discovery of application session structure, Proceedings of the 6th ACM SIGCOMM on Internet measurement, October 25-27, 2006, Rio de Janeriro, Brazil
Atul Adya , Paramvir Bahl , Ranveer Chandra , Lili Qiu, Architecture and techniques for diagnosing faults in IEEE 802.11 infrastructure networks, Proceedings of the 10th annual international conference on Mobile computing and networking, September 26-October 01, 2004, Philadelphia, PA, USA
Lukas Kencl , Jean-Yves Le Boudec, Adaptive load sharing for network processors, IEEE/ACM Transactions on Networking (TON), v.16 n.2, p.293-306, April 2008
Kashi Venkatesh Vishwanath , Amin Vahdat, Realistic and responsive network traffic generation, ACM SIGCOMM Computer Communication Review, v.36 n.4, October 2006
Eddie Kohler , Mark Handley , Sally Floyd, Designing DCCP: congestion control without reliability, ACM SIGCOMM Computer Communication Review, v.36 n.4, October 2006
Mahajan , Neil Spring , David Wetherall , Thomas Anderson, User-level internet path diagnosis, Proceedings of the nineteenth ACM symposium on Operating systems principles, October 19-22, 2003, Bolton Landing, NY, USA | flow rates;TCP;network measurement |
633558 | Quantum communication and complexity. | In the setting of communication complexity, two distributed parties want to compute a function depending on both their inputs, using as little communication as possible. The required communication can sometimes be significantly lowered if we allow the parties the use of quantum communication. We survey the main results of the young area of quantum communication its relation to teleportation and dense coding, the main examples of fast quantum communication protocols, lower bounds, and some applications. | Introduction
The area of communication complexity deals with the following type of prob-
lem. There are two separated parties, called Alice and Bob. Alice receives some
input x 2 X, Bob receives some y 2 Y , and together they want to compute
some function f(x; y). As the value f(x; y) will generally depend on both x
and y, neither Alice nor Bob will have sufficient information to do the computation
by themselves, so they will have to communicate in order to achieve
their goal. In this model, individual computation is free, but communication
is expensive and has to be minimized. How many bits do they need to communicate
between them in order to solve this? Clearly, Alice can just send
her complete input to Bob, but sometimes more efficient schemes are possible.
This model was introduced by Yao [52] and has been studied extensively, both
for its applications (like lower bounds on VLSI and circuits) and for its own
sake. We refer to [38,32] for definitions and results.
An interesting variant of the above is quantum communication complexity:
suppose that Alice and Bob each have a quantum computer at their disposal
and are allowed to exchange quantum bits (qubits) and/or to make use
Partially supported by the EU fifth framework project QAIP, IST-1999-11234.
Preprint submitted to Elsevier Preprint 21 October 2000
of the quantum correlations given by shared EPR-pairs (entangled pairs of
qubits named after Einstein, Podolsky, and Rosen [27]). Can Alice and Bob
now compute f with less communication than in the classical case? Quantum
communication complexity was first considered by Yao [53] for the model with
qubit communication and no prior EPR-pairs, and it was shown later that for
some problems the amount of communication required in the quantum world
is indeed considerably less than the amount of classical communication.
In this survey, we first give brief explanations of quantum computation and
communication, and then cover the main results of quantum communication
complexity: upper bounds (Section 5), lower bounds (Section 6), and applications
(Section 7). We include proofs of some of the central results and references
to others. Some other recent surveys of quantum communication complexity
are [48,15,35], and a more popular account can be found in [47]. Our
survey differs from these in being a bit more extensive and up to date.
Quantum Computation
In this section we briefly give the relevant background from quantum compu-
tation, referring to the book of Nielsen and Chuang [44] for more details.
2.1 States and operations
The classical unit of computation is a bit, which can take on the values 0 or
1. In the quantum case, the unit of computation is a qubit which is a linear
combination or superposition of the two classical values:
More generally, an m-qubit state jOEi is a superposition of all 2 m different
classical m-bit strings:
The classical state jii is called a basis state. The coefficient ff i is a complex
number, which is called the amplitude of jii. The amplitudes form a
dimensional complex vector, which we require to have norm 1 (i.e.
1). If some system is in state jOEi and some other is in state j/i, then their
joint state is the tensor product
We can basically do two things to a quantum state: measure it or perform a
unitary operation to it. If we measure jOEi, then we will see a basis state; we
will see jii with probability jff . Since the numbers jff induce a probability
distribution on the set of basis states they must sum to 1, which they indeed
do because jOEi has norm 1. A measurement "collapses" the measured state to
the measurement outcome: if we see jii, then jOEi has collapsed to jii, and all
other information in jOEi is gone.
Apart from measuring, we can also transform the state, i.e., change the am-
plitudes. Quantum mechanics stipulates that this transformation U must be
a linear transformation on the 2 m -dimensional vector of amplitudes:
ff 0:::0
Since the new vector of amplitudes fi i must also have norm 1, it follows that the
linear transformation U must be norm-preserving and hence unitary. 2 This
in turn implies that U has an inverse (in fact equal to its conjugate transpose
U ), hence non-measuring quantum operations are reversible.
2.2 Quantum algorithms
We describe quantum algorithms in the quantum circuit model [25,53], rather
than the somewhat more cumbersome quantum Turing machine model [24,12].
A classical Boolean circuit is a directed acyclic graph of elementary Boolean
gates (usually AND, OR, and NOT), only acting on one or two bits at a time.
It transforms an initial vector of bits (containing the input) into the output. A
quantum circuit is similar, except that the classical Boolean gates now become
elementary quantum gates. Such a gate is a unitary transformation acting only
on one or two qubits, and implicitly acting as the identity on the other qubits
of the state. A simple example of a 1-qubit gate is the Hadamard transform,
which maps basis state jbi to 1
In matrix form, this is
An example of a 2-qubit gate is the controlled-NOT (CNOT) gate, which
negates the second bit of the state depending on the first bit: jc; bi ! jc; b \Phi ci.
Both quantum measurements and quantum operations allow for a somewhat
more general description than given here (POVMs and superoperators, respectively,
see [44]), but the above definitions suffice for our purposes.
In matrix form, this is
It is known that the set of gates consisting of CNOT and all 1-qubit gates is
universal, meaning that any other unitary transformation can be written as a
product of gates from this set. We refer to [4,44] for more details.
The product of all elementary gates in a quantum circuit is a big unitary
transformation which transforms the initial state (usually a classical bitstring
containing the input x) into a final superposition. The output of the circuit is
then the outcome of measuring some dedicated part of the final state. We say
that a quantum circuit computes some function f : f0; 1g n ! Z exactly if it
always outputs the right value f(x) on input x. The circuit computes f with
bounded error if it outputs f(x) with probability at least 2=3, for all x. Notice
that a quantum circuit involves only one measurement; this is without loss of
generality, since it is known that measurements can always be pushed to the
end at the cost of a moderate amount of extra memory.
The complexity of a quantum circuit is usually measured by the number of
elementary gates it contains. A circuit is deemed efficient if its complexity is
at most polynomial in the length n of the input. The most spectacular instance
of an efficient quantum circuit (rather, a uniform family of such circuits, one
for each n) is still Shor's 1994 efficient algorithm for finding factors of large
integers. It finds a factor of arbitrary n-bit numbers with high probability
using only n 2 polylog(n) elementary gates. This compromises the security of
modern public-key cryptographic systems like RSA, which are based on the
assumed hardness of factoring.
2.3 Query algorithms
A type of quantum algorithms that we will refer to later are the query algo-
rithms. In fact, most existing quantum algorithms are of this type. Here the
input is not part of the initial state, but encoded in a special "black box"
quantum gate. The black box maps basis state ji; bi to ji; b \Phi x i i, thus giving
access to the bits x i of the input. Note that a quantum algorithm can run the
black box on a superposition of basis states, gaining access to several input
bits x i at the same time. One such application of the black box is called a
query. The complexity of a quantum circuit for computing some function f is
now the number of queries we need on the worst-case input; we don't count
the complexity of other operations in this model. In the classical world, this
query complexity is known as the decision tree complexity of f .
A simple but illustrative example is the Deutsch-Jozsa algorithm [26,23]: suppose
we get the promise that the input x 2 f0; 1g n is either or has exactly
n=2 0s and n=2 1s. Define in the first case and
in the second. It is easy to see that a deterministic classical computer needs
queries for this (if the computer has queried n=2 bits and they are
all 0, the function value is still undetermined). On the other hand, here is a
1-query quantum algorithm for this problem:
(1) Start in a basis state zeroes followed by a 1
(2) Apply a Hadamard transform to each of the
(3) Query the black box once
Apply a Hadamard transform to the first n qubits
(5) Measure the first n qubits, output 1 if the observed state is
output 0 otherwise
By following the state through these steps, it may be verified that the algorithm
outputs 1 if the input is the input is balanced.
Another important quantum query algorithm is Grover's search algorithm [30],
which finds an i such that x such an i exists in the n-bit input. It
has error probability 1=3 on each input and uses O(
n) queries, which is
optimal [10,13,54]. Note that the algorithm can also be viewed as computing
the OR-function: it can determine whether at least one of the input bits is 1.
3 Quantum Communication
The area of quantum information theory deals with the properties of quantum
information and its communication between different parties. We refer
to [11,44] for general surveys, and will here restrict ourselves to explaining
two important primitives: teleportation [8] and superdense coding [9]. These
pre-date quantum communication complexity and show some of the power of
quantum communication.
We first show how teleporting a qubit works. Alice has a qubit ff 0
that she wants to send to Bob via a classical channel. Without further resources
this would be impossible, but Alice also shares an EPR-pair 1
j11i) with Bob. Initially, their joint state is
The first two qubits belong to Alice, the third to Bob. Alice performs a CNOT
on her two qubits and then a Hadamard transform on her first qubit. Their
joint state can now be written as2
Alice
-z
Alice then measures her two qubits and sends the result (2 random classical
bits) to Bob, who now knows which transformation he must do on his qubit
in order to regain the qubit ff 0
j1i. For instance, if Alice sent 11 then
Bob knows that his qubit is ff 0
j0i. A bit-flip (jbi ! j1 \Gamma bi) followed by
a phase-flip (jbi ! (\Gamma1) b jbi) will give him Alice's original qubit ff 0
j1i.
Note that the qubit on Alice's side has been destroyed: teleporting moves a
qubit from A to B, rather than copying it. In fact, copying an unknown qubit is
impossible [50], which can be seen as follows. Suppose C were a 1-qubit copier,
i.e. for every qubit jOEi. In particular
would not copy
p(j0i+j1i) correctly, since
by linearity
In teleportation, Alice uses 2 classical bits and 1 EPR-pair to send 1 qubit to
Bob. Superdense coding achieves the opposite: using 1 qubit and 1 EPR-pair,
Alice can send 2 classical bits b 2
to Bob. It works as follows. Initially they
share an EPR-pair 1
Alice applies a phase-
flip to her half of the pair. Second, if b 2
applies a bit-flip. Third, she
sends her half of the EPR-pair to Bob, who now has one of 4 states jOE b2 b1 i:
Since these states are orthogonal, Bob can apply a unitary transformation
i and thus learns b 2
Suppose Alice wants to send n classical bits of information to Bob and they do
not share any prior entanglement. Alice can just send her n bits to Bob, but,
alternatively, Bob can also first send n=2 halves of EPR-pairs to Alice and
then Alice can send n bits in n=2 qubits using dense coding. In either case, n
qubits are exchanged between them. If Alice and Bob already share n=2 prior
EPR-pairs, then n=2 qubits suffice by superdense coding. The following result
shows that this is optimal. We will refer to it as Holevo's theorem, because
the first part is an immediate consequence of a result of [31] (the second part
was derived in [22]).
Theorem 1 (Holevo) If Alice wants to send n bits of information to Bob
via a qubit channel, and they don't share prior entanglement, then they have
to exchange at least n qubits. If they do share unlimited prior entanglement,
then Alice has to send at least n=2 qubits to Bob, no matter how many qubits
Bob sends to Alice.
A somewhat stronger and more subtle variant of this lower bound was recently
derived by Nayak [40], improving upon [1]. Suppose that Alice doesn't want
to send Bob all of her n bits, but just wants to send a message which allows
Bob to learn one of her bits x i , where Bob can choose i after the message has
been sent. Even for this weaker form of communication, Alice has to send an
n)-qubit message.
4 Quantum Communication Complexity: The Model
First we sketch the setting for classical communication complexity, referring
to [38,32] for more details. Alice and Bob want to compute some function
. If the domain D equals X \Theta Y then
f is called a total function, otherwise it is a promise function. Alice receives
input x 2 X, Bob receives input y 2 Y , with (x; y) 2 D. As the value f(x; y)
will generally depend on both x and y, some communication between Alice
and Bob is required in order for them to be able to compute f(x; y). We are
interested in the minimal amount of communication they need.
A communication protocol is a distributed algorithm where first Alice does
some individual computation, and then sends a message (of one or more bits)
to Bob, then Bob does some computation and sends a message to Alice, etc.
Each message is called a round. After one or more rounds the protocol terminates
and outputs some value, which must be known to both players. The
cost of a protocol is the total number of bits communicated on the worst-case
input. A deterministic protocol for f always has to output the right value
f(x; y) for all (x; y) 2 D. In a bounded-error protocol, Alice and Bob may flip
coins and the protocol has to output the right value f(x; y) with probability
2=3 for all (x; y) 2 D. We use D(f) and R 2
(f) to denote the minimal cost of
deterministic and bounded-error protocols for f , respectively. The subscript
'2' in R 2
(f) stands for 2-sided bounded error. For R 2
(f) we can either allow
Alice and Bob to toss coins individually (private coin) or jointly (public coin).
This makes not much difference: a public coin can save at most O(log n) bits
of communication [42], compared to a protocol with a private coin.
Some often studied total functions where
ffl Equality: EQ(x;
ffl Inner product: IP(x;
(for is the ith bit of x and x " y 2 f0; 1g n is the bit-wise
AND of x and y)
ffl Disjointness: DISJ(x; This function is 1 iff there is no i
(viewing x and y as characteristic vectors of sets, the sets
are disjoint)
It is known that
n). However, R 2
(EQ) is only O(1), as follows. Alice and Bob jointly toss a
random string r 2 f0; 1g n . Alice sends the bit a to Bob (where '\Delta' is
inner product mod 2). Bob computes and compares this with a. If
then a 6= b with probability 1/2. Thus Alice
and Bob can decide equality with small error using O(n) public coin flips and
O(1) communication. Since public coin and private coin protocols are close,
this also implies that R 2
n) with a private coin.
Now what happens if we give Alice and Bob a quantum computer and allow
them to send each other qubits and/or to make use of EPR-pairs which they
share at the start of the protocol? Formally speaking, we can model a quantum
protocol as follows. The total state consists of 3 parts: Alice's private space,
the channel, and Bob's private space. The starting state is jxij0ijyi: Alice
gets x, the channel is initially empty, and Bob gets y. Now Alice applies a
unitary transformation to her space and the channel. This corresponds to
her private computation as well as to putting a message on the channel (the
length of this message is the number of channel-qubits affected by Alice's
operation). Then Bob applies a unitary transformation to his space and the
channel, etc. At the end of the protocol Alice or Bob makes a measurement
to determine the output of the protocol. We use Q(f) to denote the minimal
communication cost of a quantum protocol that computes f(x; y) exactly (=
with error probability 0). This model was introduced by Yao [53]. In the
second model, introduced by Cleve and Buhrman [21], Alice and Bob share
an unlimited number of EPR-pairs at the start of the protocol, but now they
communicate via a classical channel: the channel has to be in a classical state
throughout the protocol. We use C (f) for the minimal complexity of an exact
protocol for f in this model. Note that we only count the communication, not
the number of EPR-pairs used. The third variant combines the strengths of the
other two: here Alice and Bob start out with an unlimited number of shared
EPR-pairs and they are allowed to communicate qubits. We use Q (f) to
denote the communication complexity in this third model. By teleportation, 1
EPR-pair and 2 classical bits can replace 1 qubit of communication, so we have
bounded-error quantum protocols. Note that a shared EPR-pair can simulate
a public coin toss: if Alice and Bob each measure their half of the pair, they
get the same random bit.
Before continuing to study this model, we first have to face an important ques-
tion: is there anything to be gained here? At first sight, the following argument
seems to rule out any significant gain. By definition, in the classical world D(f)
bits have to be communicated in order to compute f . Since Holevo's theorem
says that k qubits cannot contain more information than k classical bits, it
seems that the quantum communication complexity should be roughly D(f)
qubits as well (maybe D(f)=2 to account for superdense coding, but not less).
Fortunately and surprisingly, this argument is false, and quantum communication
complexity can sometimes be much smaller than classical communication complexity.
The information-theoretic argument via Holevo's theorem fails, because Alice
and Bob do not need to communicate the information in the D(f) bits of the
classical protocol; they are only interested in the value f(x; y), which is just
1 bit. Below we will survey the main examples that have so far been found of
gaps between quantum and classical communication complexity.
5 Quantum Communication Complexity: Upper bounds
5.1 Initial steps
Quantum communication complexity was introduced by Yao [53] and studied
by Kremer [37], but neither showed any advantages of quantum over classical
communication. Cleve and Buhrman [21] introduced the variant with classical
communication and shared EPR-pairs, and exhibited the first quantum protocol
provably better than any classical protocol. It uses quantum entanglement
to save 1 bit of classical communication. This gap was extended by Buhrman,
Cleve, and van Dam [16] and, for more than 2 parties, by Buhrman, van Dam,
Høyer, and Tapp [19].
5.2 Buhrman, Cleve, Wigderson
The first impressively large gaps between quantum and classical communication
complexity were exhibited by Buhrman, Cleve, and Wigderson [17].
Their protocols are distributed versions of known quantum query algorithms,
like the Deutsch-Jozsa and Grover algorithms. The following lemma shows
how a query algorithm induces a communication protocol:
Lemma 1 Let f(x, y) = g(x ⋆ y), where g is an n-bit Boolean function and ⋆ is some
binary connective (for instance ⊕ or ∧), applied bit-wise. If
there is a T-query quantum algorithm for g, then there is a protocol for f that
communicates T(2 log n + 4) qubits (and uses no prior entanglement) and that
has the same error probability as the query algorithm.
Proof The quantum protocol consists of Alice's simulating the quantum
query algorithm A on input x ⋆ y. Every query in A will correspond to 2 rounds
of communication. Namely, suppose Alice at some point wants to apply a query
to the state |φ⟩ = Σ_{i,b} α_{ib} |i, b⟩ (for simplicity we omit Alice's workspace).
Then she adds a |0⟩-qubit to the state, applies the unitary mapping |i, b, 0⟩ →
|i, b, x_i⟩, and sends the resulting state to Bob. Bob now applies the unitary
mapping |i, b, x_i⟩ → |i, b ⊕ (x_i ⋆ y_i), x_i⟩ and sends the result back to Alice. Alice
applies |i, b, x_i⟩ → |i, b, 0⟩, takes off the last qubit, and ends up with the state
Σ_{i,b} α_{ib} |i, b ⊕ (x_i ⋆ y_i)⟩, which is exactly the result of applying an x ⋆ y-query
to |φ⟩. Thus every query to x ⋆ y can be simulated using 2 log n + 4 qubits
of communication. The final quantum protocol will have T(2 log n + 4) qubits
of communication and computes f(x, y) with the same error probability as A
has on input x ⋆ y. □
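The bookkeeping in this proof can be checked mechanically on small instances. The following numpy sketch (ours, not from the text; the connective is fixed to AND and n = 4) verifies that Alice's map, Bob's map, and Alice's uncomputation act on states with a cleared auxiliary qubit exactly like an x ⋆ y-query.

import numpy as np

n = 4
x = np.array([1, 0, 1, 1])
y = np.array([0, 1, 1, 0])
star = lambda a, b: a & b                  # the binary connective, here AND

dim = n * 2 * 2                            # basis states |i, b, c>
idx = lambda i, b, c: (i * 2 + b) * 2 + c

def permutation(mapping):
    # unitary permutation matrix sending basis state k to mapping[k]
    U = np.zeros((dim, dim))
    for k, m in enumerate(mapping):
        U[m, k] = 1
    return U

# Alice: |i,b,c> -> |i,b,c XOR x_i>  (on c = 0 this writes x_i into the last register)
A = permutation([idx(i, b, c ^ x[i]) for i in range(n) for b in range(2) for c in range(2)])
# Bob:   |i,b,c> -> |i, b XOR (c * y_i), c>
B = permutation([idx(i, b ^ star(c, y[i]), c) for i in range(n) for b in range(2) for c in range(2)])

net = A @ B @ A                            # Alice's map, Bob's map, Alice's uncomputation

for i in range(n):
    for b in range(2):
        out = net @ np.eye(dim)[:, idx(i, b, 0)]
        assert out[idx(i, b ^ star(x[i], y[i]), 0)] == 1
print("the simulated query acts as the x*y oracle on all basis states")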
Now consider the disjointness function: DISJ(x, y) = NOR(x ∧ y). Since
Grover's algorithm can compute the NOR of n variables with O(√n) queries
with bounded error, the previous lemma implies a bounded-error protocol for
disjointness with O(√n log n) qubits. On the other hand, the linear lower
bound for disjointness is a well-known result of classical communication complexity
[33,46]. Thus we obtain the following near-quadratic separation:
Theorem 2 (Buhrman, Cleve, Wigderson) Q_2(DISJ) = O(√n log n) and
R_2(DISJ) = Ω(n).
Another separation is given by a distributed version of the Deutsch-Jozsa
problem of Section 2.3: define EQ′(x, y) = 1 iff x = y. This is a promise
version of equality, where the promise is that x and y are either equal or are
at Hamming distance n/2. Since there is an exact 1-query quantum algorithm
for the Deutsch-Jozsa problem, we have Q(EQ′) = O(log n). In contrast, Buhrman, Cleve,
and Wigderson use a combinatorial result of Frankl and Rödl [29] to prove the
classical lower bound D(EQ′) = Ω(n). Thus we have the following exponential
separation for exact protocols:
Theorem 3 (Buhrman, Cleve, Wigderson) Q(EQ′) = O(log n) and D(EQ′) = Ω(n).
5.3 Raz
Notice the contrast between the two separations of the previous section. For
the Deutsch-Jozsa problem we get an exponential quantum-classical separa-
tion, but the separation only holds if we force the classical protocol to be exact;
it is easy to see that O(log n) bits are sufficient if we allow some error (the
classical protocol can just try a few random positions i and check if x_i = y_i or
not). On the other hand, the gap for the disjointness function is only quadratic,
but it holds even if we allow classical protocols to have some error probability.
Ran Raz [45] has exhibited a function where the quantum-classical separation
has both features: the quantum protocol is exponentially better than the classical
protocol, even if the latter is allowed some error probability. Consider
the following promise problem P:
Alice receives a unit vector v ∈ R^m and a decomposition of the corresponding
space into two orthogonal subspaces H^(0) and H^(1). Bob receives an m × m
unitary transformation U. Promise: Uv is either "close" to H^(0) or to H^(1).
Question: which of the two?
As stated, this is a problem with continuous input, but it can be discretized
in a natural way by approximating each real number by O(log m) bits. Alice
and Bob's input is now n = O(m² log m) bits long. There is a simple yet
efficient 2-round quantum protocol for this problem: Alice views v as a log m-
qubit vector and sends this to Bob. Bob applies U and sends back the result.
Alice then measures in which subspace H^(i) the vector Uv lies and outputs
the resulting i. This takes only 2 log m qubits of communication.
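The quantum part of this protocol is plain linear algebra and can be sketched in a few lines of numpy (ours, not from the text; the instance below is constructed so that the promise holds exactly, with H^(0) spanned by the first m/2 coordinates).

import numpy as np

rng = np.random.default_rng(0)
m = 8                                        # dimension, i.e. log2(m) = 3 qubits

U, _ = np.linalg.qr(rng.normal(size=(m, m))) # Bob's input: a random m x m (real) unitary
v = U.T @ np.eye(m)[:, 0]                    # Alice's unit vector, chosen so that U v lies in H0

state = U @ v                                # Alice sends v (log m qubits), Bob applies U and returns it
p0 = np.linalg.norm(state[: m // 2]) ** 2    # Alice measures the projector onto H0
print("protocol answers 0 with probability", round(p0, 3))   # 1.0 on this promise instance

Under the promise that Uv is close to one of the two subspaces, the measurement returns the correct index with high probability, at a total cost of 2 log m qubits.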
The efficiency of this protocol comes from the fact that an m-dimensional
vector can be "compressed" or "represented" as a log m-qubit state. Similar
compression is not possible with classical bits, which suggests that any classical
protocol for P will have to send the vector v more or less literally and hence
will require a lot of communication. This turns out to be true but the proof
(given in [45]) is surprisingly hard. The result is the first exponential gap
between Q_2(f) and R_2(f):
Theorem 4 (Raz) Q_2(P) = O(log n) and R_2(P) = Ω(n^{1/4} / log n).
6 Quantum Communication Complexity: Lower Bounds
In the previous section we exhibited some of the power of quantum communication
complexity. Here we will look at its limitations, first for exact protocols
and then for the bounded-error case.
6.1 Lower bounds on exact protocols
Quite good lower bounds are known for exact quantum protocols for total
functions. For a total function f : X × Y → {0, 1}, let M_f denote
the communication matrix of f. This is an |X| × |Y| Boolean matrix which
completely describes f . Let rank(f) denote the rank of M f over the reals.
Mehlhorn and Schmidt [39] proved that D(f) ≥ log rank(f), which is the
main source of lower bounds on D(f ). For Q(f) a similar lower bound follows
from techniques of Yao and Kremer [53,37], as first observed in [17]. This
bound was later extended to the case where Alice and Bob share unlimited
prior entanglement by Buhrman and de Wolf [20]. Their result turned out to
be equivalent to a result in Nielsen's thesis [43, Section 6.4.2]. The result is:
Theorem 5 Q*(f) ≥ log rank(f)/2 and C*(f) ≥ log rank(f).
Hence quantum communication complexity in the exact model is maximal
whenever M f has full rank, which it does for almost all functions, including
equality, (the complement of) inner product, and disjointness.
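For small n the rank bound of Theorem 5 can be computed directly; the numpy sketch below (ours) builds the communication matrices of EQ and IP for n = 4 and prints the resulting lower bounds.

import numpy as np
from itertools import product

n = 4
inputs = list(product([0, 1], repeat=n))

def comm_matrix(f):
    return np.array([[f(x, y) for y in inputs] for x in inputs])

EQ = comm_matrix(lambda x, y: int(x == y))
IP = comm_matrix(lambda x, y: sum(a * b for a, b in zip(x, y)) % 2)

for name, M in (("EQ", EQ), ("IP", IP)):
    r = np.linalg.matrix_rank(M)
    print(name, "rank =", r, " so C*(", name, ") >= log2(rank) =", round(float(np.log2(r)), 2))

The EQ matrix is the 2^n x 2^n identity (full rank), and the 0/1 matrix of IP has rank 2^n - 1, so in both cases the exact complexities are essentially maximal, i.e. linear in n.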
How tight is the log rank(f) lower bound? It has been conjectured that D(f) ≤
(log rank(f))^O(1) for all total functions, in which case log rank(f) would characterize
D(f) up to polynomial factors. If this log-rank conjecture is true, then
Theorem 5 implies that Q*(f) and D(f) are polynomially close for all total
f, since then Q*(f) ≤ D(f) ≤ (log rank(f))^O(1) ≤ (2Q*(f))^O(1). Some small
classes of functions where this provably holds are identified in [20]. It should
be noted that, in fact, no total f is known where Q*(f) is more than a factor
of 2 smaller than D(f) (the factor of 2 can be achieved by superdense coding).
6.2 Lower bounds on bounded-error protocols
The previous section showed some strong lower bounds for exact quantum
protocols. The situation is much worse in the case of bounded-error protocols,
for which very few good lower bounds are known. One of the few general
lower bound techniques known to hold for bounded-error quantum complexity
(without prior entanglement) is the so-called "discrepancy method". This was
shown by Kremer [37], who used it to derive an Ω(n) lower bound for Q_2(IP).
Cleve, van Dam, Nielsen, and Tapp [22] later independently proved such a
lower bound for Q_2*(IP).
We will sketch the very elegant proof of [22] here for the case of exact protocols;
for bounded-error protocols it is similar but more technical. The proof uses
the IP-protocol to communicate Alice's n-bit input to Bob, and then invokes
Holevo's theorem to conclude that many qubits must have been communicated
in order to achieve this. Suppose Alice and Bob have some protocol for IP.
They can use this to compute the following mapping:

  |x⟩|y⟩ → (-1)^{x·y} |x⟩|y⟩.

Now suppose Alice starts with an arbitrary n-bit state |x⟩ and Bob starts with
the uniform superposition (1/√(2^n)) Σ_y |y⟩. If they apply the above mapping,
the final state becomes

  |x⟩ ⊗ (1/√(2^n)) Σ_y (-1)^{x·y} |y⟩.
If Bob now applies a Hadamard transform to each of his n qubits, then he
obtains the basis state jxi, so Alice's n classical bits have been communicated
to Bob. Theorem 1 now implies that the IP-protocol must communicate at least n/2
qubits, even if Alice and Bob share unlimited prior entanglement.
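The final Hadamard step can be checked numerically; the small numpy sketch below (ours) applies the phase (-1)^{x·y} to Bob's uniform superposition and recovers the basis state |x⟩ exactly.

import numpy as np
from itertools import product

n = 3
x = (1, 0, 1)
ys = list(product([0, 1], repeat=n))
ip = lambda a, b: sum(ai * bi for ai, bi in zip(a, b)) % 2

# Bob's register after the phase mapping, with Alice's register fixed to |x>
state = np.array([(-1) ** ip(x, y) for y in ys]) / np.sqrt(2 ** n)

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # single-qubit Hadamard
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)                           # n-fold Hadamard transform

result = H @ state
print(np.round(result, 3))                       # all amplitude sits on basis state |101> = |x>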
The above proof works for IP, but does not easily yield good bounds in general.
Neither does the discrepancy method, or an approximate version of the rank
lower bound which was noted in [17]. New lower bound techniques for quantum
communication are required. Of particular interest is whether the upper bound
Q_2(DISJ) = O(√n log n) of [17] is tight. Because disjointness can be reduced
to many other problems (it is in fact "coNP-complete" [3]), a good lower
bound for disjointness would imply many other lower bounds as well. The
best known lower bound is only Ω(log n), which was proven for Q_2(DISJ)
in [2] (this also follows from discrepancy) and for Q_2*(DISJ) in [20]. Buhrman
and de Wolf [20] have a translation of the approximate rank lower bound to
properties of polynomials which might prove an Ω(√n) bound for disjointness,
but a crucial link is still missing in their approach.
7 Quantum Communication Complexity: Applications
The main applications of classical communication complexity have been in
proving lower bounds for various models like VLSI, Boolean circuits, formula
size, Turing machine complexity, data structures, automata size etc. We refer
to [38,32] for many examples. Typically one proceeds by showing that a
communication complexity problem f is "embedded" in the computational
problem P of interest, and then uses communication complexity lower bounds
on f to establish lower bounds on P . Similarly, quantum communication complexity
has been used to establish lower bounds in various models of quantum
computation, though such applications have received relatively little attention
so far. We will briefly mention some.
Yao [53] initially introduced quantum communication complexity as a tool
for proving a superlinear lower bound on the quantum formula size of the
majority function (a "formula" is a circuit of restricted form). More recently,
Klauck [34] used one-way quantum communication complexity lower bounds
to prove lower bounds on the size of quantum formulae.
Since upper bounds on query complexity give upper bounds on communication
complexity (Lemma 1), lower bounds on communication complexity give lower
bounds on query complexity. For instance, IP(x, y) = PARITY(x ∧ y), so the
Ω(n) bound for IP (Section 6.2) implies an Ω(n/ log n) lower bound for the
quantum query complexity of the parity function, as observed by Buhrman,
Cleve, and Wigderson [17]. This lower bound was later strengthened to n/2
in [5,28].
Furthermore, as in the classical case, lower bounds on (one-way) communication
complexity imply lower bounds on the size of finite automata. This was
used by Klauck [34] to show that Las Vegas quantum finite automata cannot
be much smaller than classical deterministic finite automata.
Finally, Ben-Or [7] has recently applied the lower bounds for IP in a new proof
of the security of quantum key distribution.
8 Other Developments and Open Problems
Here we mention some other results in quantum communication complexity
or related models:
• Quantum sampling. For the sampling problem, Alice and Bob do not
want to compute some f(x; y), but instead want to sample an (x; y)-pair
according to some known joint probability distribution, using as little communication
as possible. Ambainis et al. [2] give a tight algebraic characterization
of quantum sampling complexity, and exhibit an exponential gap
between the quantum and classical communication required for a sampling
problem related to disjointness.
• Spooky communication. Brassard, Cleve, and Tapp [14] exhibit tasks
which can be achieved in the quantum world with entanglement and no
communication, but which would require communication in the classical
world. They also give upper and lower bounds on the amount of classical
communication needed to "simulate" EPR-pairs. Their results may be
viewed as quantitative extensions of the famous Bell inequalities [6].
• Las Vegas protocols. In this paper we just considered two modes of com-
putation: exact and bounded-error. An intermediate type of protocols are
zero-error or Las Vegas protocols. These never output an incorrect answer,
but may claim ignorance with probability at most 1/2. Some quantum-
classical separations for zero-error protocols may be found in [18,34].
• One-way communication. Suppose the communication is one-way: Alice
just sends qubits to Bob. Klauck [34] showed for all total functions that
quantum communication is not significantly better than classical communication
in this case.
• Rounds. It is well known in classical communication complexity that allowing
Alice and Bob k+1 rounds of communication instead of k can reduce
the required communication exponentially. An analogous result has recently
been shown for quantum communication [41] (see also [36]).
• Non-deterministic communication complexity. A non-deterministic
protocol has positive acceptance probability on input (x, y) iff f(x, y) = 1.
Classically, the non-deterministic communication complexity is characterized
by the logarithm of the cover number of the communication matrix M f .
Recently, de Wolf [49] showed that the quantum non-deterministic communication
complexity is characterized (up to a factor of 2) by the logarithm
of the rank of a non-deterministic version of M f .
Finally, here's a list of interesting open problems in quantum communication
complexity:
• Raz's exponential gap only holds for a promise problem. Are D(f) and Q*(f)
polynomially related for all total f? As we showed in Section 6.1, a
positive answer to this question would be implied by the classical log-rank
conjecture. A similar question can be posed for the relation between R_2(f)
and Q_2*(f).
• Does entanglement add much power to qubit communication? That is, what
are the biggest gaps between Q(f) and Q*(f), and between Q_2(f) and
Q_2*(f)?
• Develop good lower bound techniques for bounded-error quantum protocols.
In particular one that gives a good lower bound for disjointness.
• Classically, Yao [51] used the minimax theorem from game theory to show an
equivalence between deterministic protocols with a probability distribution
on the inputs, and bounded-error protocols. Is some relation like this true in
the quantum case as well? If so, lower bound techniques for exact quantum
protocols can be used to deal with the previous question.
--R
Quantum dense coding and a lower bound for 1-way quantum finite automata
The quantum communication complexity of sampling.
Complexity classes in communication complexity theory.
Elementary gates for quantum computation.
Quantum lower bounds by polynomials.
On the Einstein-Podolsky-Rosen paradox
Security of quantum key distribution.
Teleporting an unknown quantum state via dual classical and Einstein- Podolsky-Rosen channels
Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states
Strengths and weaknesses of quantum computing.
Quantum information theory.
Quantum complexity theory.
Tight bounds on quantum searching.
The cost of exactly simulating quantum entanglement with classical communication.
Quantum computing and communication complexity.
Quantum entanglement and communication complexity.
Quantum vs. classical communication and computation (preliminary version).
Bounds for small-error and zero-error quantum algorithms
Multiparty quantum communication complexity.
Communication complexity lower bounds by polynomials.
Substituting quantum entanglement for communication.
Quantum entanglement and the communication complexity of the inner product function.
Quantum algorithms revisited.
Quantum theory
Quantum computational networks.
Rapid solution of problems by quantum computation.
Can quantummechanical description of physical reality be considered complete?
A limit on the speed of quantum computation in determining parity.
Forbidden intersections.
A fast quantum mechanical algorithm for database search.
Bounds for the quantity of information transmitted by a quantum communication channel.
The probabilistic communication complexity of set intersection.
On quantum and probabilistic communication: Las Vegas and one-way protocols
Quantum communication complexity.
On rounds in quantum communication.
Quantum communication.
Communication Complexity.
Las Vegas is better than determinism in VLSI and distributed computing.
Optimal lower bounds for quantum automata and random access codes.
Interaction in quantum communication complexity.
Private vs. common random bits in communication complexity.
Quantum Information Theory.
Quantum Computation and Quantum Information.
Exponential separation of quantum and classical communication complexity.
On the distributional complexity of disjointness.
Physicists triumph at Guess my Number.
Classical versus quantum communication complexity.
Characterization of non-deterministic quantum query and quantum communication complexity
A single quantum cannot be copied.
Probabilistic computations: Toward a unified measure of complexity.
Some complexity questions related to distributive computing.
Quantum circuit complexity.
Grover's quantum searching algorithm is optimal.
--TR
Private vs. common random bits in communication complexity
The probabilistic communication complexity of set intersection
On the distributional complexity of disjointness
A fast quantum mechanical algorithm for database search
Public vs. private coin flips in one round communication games (extended abstract)
Quantum Complexity Theory
Strengths and Weaknesses of Quantum Computing
Communication complexity and parallel computing
Communication complexity
Quantum vs. classical communication and computation
Exponential separation of quantum and classical communication complexity
Dense quantum coding and a lower bound for 1-way quantum automata
Classical versus quantum communication complexity
On quantum and probabilistic communication
Interaction in quantum communication and the complexity of set disjointness
On communication over an entanglement-assisted quantum channel
Quantum Entanglement and Communication Complexity
Quantum Entanglement and the Communication Complexity of the Inner Product Function
Lower Bounds in the Quantum Cell Probe Model
Improved Quantum Communication Complexity Bounds for Disjointness and Equality
On Quantum Versions of the Yao Principle
Randomized Simultaneous Messages
Characterization of Non-Deterministic Quantum Query and Quantum Communication Complexity
The Quantum Communication Complexity of Sampling
Quantum Lower Bounds by Polynomials
Optimal Lower Bounds for Quantum Automata and Random Access Codes
Bounds for Small-Error and Zero-Error Quantum Algorithms
Las Vegas is better than determinism in VLSI and distributed computing (Extended Abstract)
Some complexity questions related to distributive computing(Preliminary Report)
Communication Complexity Lower Bounds by Polynomials
Lower Bounds for Quantum Communication Complexity
--CTR
Franois Le Gall, Exponential separation of quantum and classical online space complexity, Proceedings of the eighteenth annual ACM symposium on Parallelism in algorithms and architectures, July 30-August 02, 2006, Cambridge, Massachusetts, USA
Dmitry Gavinsky , Julia Kempe , Oded Regev , Ronald de Wolf, Bounded-error quantum state identification and exponential separations in communication complexity, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Dmitry Gavinsky , Julia Kempe , Iordanis Kerenidis , Ran Raz , Ronald de Wolf, Exponential separations for one-way quantum communication complexity, with applications to cryptography, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA | communication complexity;quantum computing |
633565 | Logical foundations of cafeOBJ. | This paper surveys the logical and mathematical foundations of CafeOBJ, which is a successor of the famous algebraic specification language OBJ but adds to it several new primitive paradigms such as behavioural concurrent specification and rewriting logic.We first give a concise overview of CafeOBJ. Then we focus on the actual logical foundations of the language at two different levels: basic specification and structured specification, including also the definition of the CafeOBJ institution. We survey some novel or more classical theoretical concepts supporting the logical foundations of CafeOBJ, pointing out the main results but without giving proofs and without discussing all mathematical details. Novel theoretical concepts include the coherent hidden algebra formalism and its combination with rewriting logic, and Grothendieck (or fibred) institutions. However, for proofs and for some of the mathematical details not discussed here we give pointers to relevant publications.The logical foundations of CafeOBJ are structured by the concept of institution. Moreover, the design of CafeOBJ emerged from its logical foundations, and institution concepts played a crucial rle in structuring the language design. | Introduction
CafeOBJ is an executable industrial strength algebraic specification language which
is a modern successor of OBJ, incorporating several new algebraic specification
paradigms. Its definition is given in [12]. CafeOBJ is intended to be mainly used
for system specification, formal verification of specifications, rapid prototyping,
programming, etc. Here is a brief overview of its most important features.
Equational Specification and Programming.
This is inherited from OBJ [27, 17] and constitutes the basis of the language, the
other features being somehow built on top of it. As with OBJ, CafeOBJ is executable
(by term rewriting), which gives an elegant declarative way of functional
programming, often referred to as algebraic programming. 3 As with OBJ, CafeOBJ
also permits equational specification modulo several equational theories such as
associativity, commutativity, identity, idempotence, and combinations between all
these. This feature is reflected at the execution level by term rewriting modulo such
equational theories.
Behavioural Specification.
Behavioural specification [21, 22, 13, 29] provides a novel generalisation of ordinary
algebraic specification. Behavioural specification characterises how objects
(and systems) behave, not how they are implemented. This new form of abstraction
can be very powerful in the specification and verification of software systems since
it naturally embeds other useful paradigms such as concurrency, object-orientation,
constraints, nondeterminism, etc. (see [22] for details). Behavioural abstraction is
achieved by using specification with hidden sorts and a behavioural concept of satisfaction
based on the idea of indistinguishability of states that are observationally
the same, which also generalises process algebra and transition systems (see [22]).
CafeOBJ behavioural specification paradigm is based on coherent hidden algebra
(abbreviated 'CHA') of [13], which is both a simplification and an extension of classical
hidden algebra of [22] in several directions, most notably allowing operations
with multiple hidden sorts in the arity. Coherent hidden algebra comes very close
to the "observational logic" of Bidoit and Hennicker [29].
CafeOBJ directly supports behavioural specification and its proof theory through
special language constructs, such as
hidden sorts (for states of systems),
behavioural operations (for direct "actions" and "observations" on states of sys-
tems),
behavioural coherence declarations for (non-behavioural) operations (which may
be either derived (indirect) "observations" or "constructors" on states of sys-
tems), and
behavioural axioms (stating behavioural satisfaction).
The advanced coinduction proof method receives support in CafeOBJ via a default
coinduction relation (denoted =*=). In CafeOBJ, coinduction can
be used either in the classical hidden algebra sense [22] for proving behavioural
Please notice that although this paradigm may be used as programming, this aspect is
still secondary to its specification side.
equivalence of states of objects, or for proving behavioural transitions (which appear
when applying behavioural abstraction to rewriting logic). 4
Besides language constructs, CafeOBJ supports behavioural specification and verification
by several methodologies. 5 CafeOBJ currently highlights a methodology
for concurrent object composition which features high reusability not only of
specification code but also of verifications [12, 30]. Behavioural specification in
CafeOBJ may also be effectively used as an object-oriented (state-oriented) alternative
for classical data-oriented specifications. Experiments seem to indicate that
an object-oriented style of specification even of basic data types (such as sets, lists,
etc.) may lead to higher simplicity of code and drastic simplification of verification
process [12].
Behavioural specification is reflected at the execution level by the concept of behavioural
rewriting [12, 13] which refines ordinary rewriting with a condition ensuring
the correctness of the use of behavioural equations in proving strict equalities
Rewriting Logic Specification.
Rewriting logic specification in CafeOBJ is based on a simplified version of Mese-
guer's rewriting logic (abbreviated as `RWL') [32] specification framework for concurrent
systems which gives a non-trivial extension of traditional algebraic specification
towards concurrency. RWL incorporates many different models of concurrency
in a natural, simple, and elegant way, thus giving CafeOBJ a wide range of
applications. Unlike Maude [2], the current CafeOBJ design does not fully support
labelled RWL which permits full reasoning about multiple transitions between
states (or system configurations), but provides proof support for reasoning about the
existence of transitions between states (or configurations) of concurrent systems via
a built-in predicate (denoted ==>) with dynamic definition encoding both the proof
theory of RWL and the user defined transitions (rules) into equational logic.
From a methodological perspective, CafeOBJ develops the use of RWL transitions
for specifying and verifying the properties of declarative encoding of algorithms
(see [12]) as well as for specifying and verifying transition systems.
4 However, until the time this paper was written, the latter has not been yet explored suffi-
ciently, especially practically.
5 This is still an open research topic, the current methodologies may be developed further
and new methodologies may be added in the future.
Module System.
The principles of the CafeOBJ module system are inherited from OBJ which
builds on ideas first realized in the language Clear [1], most notably institutions
[19, 15]. CafeOBJ module system features
several kinds of imports,
sharing for multiple imports,
parameterised programming allowing
multiple parameters,
views for parameter instantiation,
integration of CafeOBJ specifications with executable code in a lower level
language
module expressions.
However, the concrete design of the language revises the OBJ view on importation
modes and parameters [12].
Type System and Partiality.
CafeOBJ has a type system that allows subtypes based on order sorted algebra
(abbreviated 'OSA') [25, 20]. This provides a mathematically rigorous form of
runtime type checking and error handling, giving CafeOBJ a syntactic flexibility
comparable to that of untyped languages, while preserving all the advantages of
strong typing.
At this moment the concrete order sortedness formalism is still open at least at the
level of the language definition. CafeOBJ does not directly support partial operations
but rather handles them by using error sorts and a sort membership predicate in the
style of membership equational logic (abbreviated 'MEL') [33]. The semantics of
specifications with partial operations is given by MEL.
Logical semantics.
CafeOBJ is a declarative language with firm mathematical and logical foundations
in the same way as other OBJ-family languages (OBJ, Eqlog [23, 4], FOOPS [24],
Maude [32]) are. The mathematical semantics of CafeOBJ is based on state-of-the-
art algebraic specification concepts and results, and is strongly based on category
theory and the theory of institutions [19, 11, 9, 15]. The following are the principles
governing the logical and mathematical foundations of CafeOBJ:
P1. there is an underlying logic 6 in which all basic constructs and features of
the language can be rigorously explained.
6 Here "logic" should be understood in the modern relativistic sense of "institution", which
provides a mathematical definition for a logic (see [19]), rather than in the more classical sense.
P2. provide an integrated, cohesive, and unitary approach to the semantics
of specification in-the-small and in-the-large.
P3. develop all ingredients (concepts, results, etc.) at the highest appropriate
level of abstraction.
CafeOBJ is a multi-paradigm language. Each of the main paradigms implemented
in CafeOBJ is rigorously based on some underlying logic; the paradigms resulting
from various combinations are based on the combination of logics. The structure of
these logics is shown by the following CafeOBJ cube, where the full arrows mean
embedding between the logics, which correspond to institution embeddings (i.e., a
strong form of institution morphisms of [19, 15]) (the orientation of arrows goes
from "more complex" to "less complex" logics).
[CafeOBJ cube diagram: the eight logics MSA, OSA, HA, HOSA, RWL, OSRWL, HRWL, and
HOSRWL as nodes, with full arrows (institution embeddings) oriented from the more
complex to the less complex logics; MSA = many sorted algebra, OSA = order sorted
algebra, HA = hidden algebra, RWL = rewriting logic.]
The mathematical structure represented by this cube is that of an indexed institution
[11]. The CafeOBJ institution is a Grothendieck (or fibred) institution [11]
obtained by applying a Grothendieck construction to this cube (i.e., the indexed
institution). Note that by employing other logical-based paradigms the CafeOBJ
cube may be thought of as a hyper-cube (see [12] for details).
1.1 Summary of the paper
The first part of this paper is dedicated to the foundations of basic specifications.
The main topic of this part is the definition of HOSRWL, the hidden order sorted
rewriting logic institution, which embeds all other institutions of the CafeOBJ
cube. In this way, the HOSRWL institution contains the mathematical foundations
for all basic specification CafeOBJ constructs.
The second part of the paper presents the novel concept of Grothendieck institution
(developed by [11]) which constructs the CafeOBJ institution from the CafeOBJ
cube.
The last section contains the definitions of the main mathematical concepts for
structuring specification in CafeOBJ.
The main concepts of the logical foundations of CafeOBJ are illustrated with several
examples, including CafeOBJ code. We assume familiarity with CafeOBJ
including its syntax and semantics (see [12] or several papers such as [14]).
Terminology and Notations
This work assumes some familiarity with basic general algebra (in its many-sorted
and order-sorted form) and category theory. Relevant background in general algebra
can be found in [18, 26, 34] for the many-sorted version, and in [25, 20] for
the order-sorted version. For category theory we generally use the same notations
and terminology as Mac Lane [31], except that composition is denoted by ";" and
written in the diagrammatic order. The application of functions (functors) to arguments
may be written either normally using parentheses, or else in diagrammatic
order without parentheses, or, more rarely, by using sub-scripts or super-scripts.
The category of sets is denoted as Set, and the category of categories 7 as Cat. The
opposite of a category C is denoted by C^op. The class of objects of a category C is
denoted by |C|; also the set of arrows in C having the object a as source and the
object b as target is denoted as C(a, b).
Indexed categories [35] play an important rôle in this work. [36] constitutes a
good reference for indexed categories and their applications to algebraic specifica-
tion. An indexed category [36] is a functor B : I^op → Cat; sometimes we denote B(i) as B_i
for an index i ∈ |I| and B(u) as B^u for an index morphism u ∈ I.
The following 'flattening' construction providing the canonical fibration associated
to an indexed category is known under the name of the Grothendieck construc-
tion and plays an important rôle in mathematics and in particular in this paper.
Given an indexed category B : I^op → Cat, let B♯ be the Grothendieck category having
pairs ⟨i, a⟩, with i ∈ |I| and a ∈ |B_i|, as objects and ⟨u, j⟩ : ⟨i, a⟩ → ⟨i′, a′⟩, with
u ∈ I(i, i′) and j : a → a′B^u an arrow of B_i, as arrows. The composition of arrows in B♯ is defined
by ⟨u, j⟩;⟨u′, j′⟩ = ⟨u;u′, j;(j′B^u)⟩.
7 We steer clear of any foundational problem related to the "category of all categories";
several solutions can be found in the literature, see, for example [31].
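A toy instance may help fix intuition; the Python sketch below (ours, with an assumed two-object index category and preorders as the component categories) spells out when an arrow ⟨u, j⟩ exists in the flattened category.

# index category I: objects "i", "j" and one non-identity morphism u : i -> j;
# B_i, B_j are preorders (at most one arrow between two objects), B^u : B_j -> B_i is monotone
B = {
    "i": {"objects": {0, 1, 2}, "leq": lambda a, b: a <= b},
    "j": {"objects": {"x", "y"}, "leq": lambda a, b: a == b or (a, b) == ("x", "y")},
}
Bu = {"x": 0, "y": 2}      # the functor B^u : B_j -> B_i on objects

def gr_arrow(i, a, i2, a2):
    # an arrow <u, j> : <i, a> -> <i2, a2> needs u : i -> i2 in I and an arrow a -> a2 B^u in B_i
    if i == i2:
        return B[i]["leq"](a, a2)               # identity index morphism
    if (i, i2) == ("i", "j"):
        return B["i"]["leq"](a, Bu[a2])         # the index morphism u
    return False

print(gr_arrow("i", 1, "j", "y"))   # True:  1 <= y B^u = 2 holds in B_i
print(gr_arrow("i", 1, "j", "x"))   # False: 1 <= x B^u = 0 fails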
2 Foundations of Basic Specifications
At this level, semantics of CafeOBJ is concerned with the semantics of collections
of specification statements. CafeOBJ modules can be flattened to such basic specifications
by an obvious induction process on the module composition structure. In
CafeOBJ we can have several kinds of specifications, the basic kinds corresponding
to the basic CafeOBJ specification/programming paradigms:
equational specifications,
rewriting specifications,
behavioural specifications, and
behavioural rewriting specifications.
The membership of a basic specification to a certain class is determined by the
CafeOBJ convention that each basic specification should be regarded as implementing
the simplest possible combination of paradigms resulting from its syntactic
content.
2.1 Loose and Tight Denotation
The key concept of specification in-the-small is the satisfaction relation between
the models and the sentences of a given specification, which is also the key notion
of the abstract concept of institution. Each kind of specification has its own concept
of satisfaction, and Section 2.2 surveys them briefly.
Each class of basic specifications has an underlying logic in the CafeOBJ cube.
Specifications can be regarded as finite sets of sentences in the underlying logic.
This enables us to formulate the principle of semantics of CafeOBJ specification
in-the-small:
Each basic specification determines a theory in the corresponding
institution. The denotation [[SP]] of a basic specification SP is the class of
models MOD(T SP ) of its corresponding theory T SP if loose, and it is the
initial model 0 T SP of the theory, if tight.
A basic specification can have either loose or initial denotation, and this can be directly
specified by the user. CafeOBJ does not directly implement final semantics,
however final models play an important rôle for the loose semantics of behavioural
specifications (see [13, 8]).
Initial model semantics applies only to non-behavioural specification, and is supported
by the following result:
Theorem 1 Let T be a theory in either MSA, OSA, RWL, or OSRWL. Then the
initial model 0 T exists.
This very important result appears in various variants and can be regarded as a
classic of algebraic specification theory. The reader may wish to consult [26] for
MSA, [25, 20] for OSA, [32] for RWL, and although, up to our knowledge, the
result has not yet been published, it is also valid for OSRWL.
Because of the importance of the construction of the initial model we briefly recall
it here. Let S be the signature of the theory, consisting of a set of sorts (partially
ordered in the order-sorted case) and a sort-ranked set of operation symbols
(possibly overloaded). The sorted set T_S of S-terms is the least sorted set closed
under:
- each constant σ ∈ S_{[],s} is an S-term of sort s, and
- σ(t_1, . . . , t_n) is an S-term of sort s whenever σ ∈ S_{s_1...s_n, s} and each t_i is an
  S-term of sort s_i, for i ∈ {1, . . . , n}.
The operations in S can be interpreted on T_S in the obvious manner, thus making
it into an S-algebra 0_S. If T is equational, then its ground part determines a congruence
≡_T on 0_S. Then 0_T is the quotient 0_S / ≡_T, whose carriers are equivalence classes of
S-terms under ≡_T. If T is a pure rewriting theory then 0_T is a rewriting logic model
whose carriers (0_T)_s are categories with S-terms as objects and concurrent rewrite
sequences (using the rules of T) as arrows. Finally, rewrite theories including equations
require the combination of the above two constructions.
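For readers who prefer something executable, the following Python sketch (ours) builds the ground terms T_S for a small one-sorted signature with a constant 0, a unary successor s, and a binary _+_ ; this is the carrier of 0_S before any quotienting.

from itertools import product

SIG = {"0": ((), "Nat"), "s": (("Nat",), "Nat"), "+": (("Nat", "Nat"), "Nat")}   # op -> (arity, sort)

def ground_terms(depth):
    # close the (here single-sorted) carrier under the operations, up to the given depth
    T = {"Nat": set()}
    for _ in range(depth):
        new = set()
        for op, (args, res) in SIG.items():
            for combo in product(*(T[s] for s in args)):
                new.add((op,) + combo)          # a term is an operation applied to subterms
        T["Nat"] |= new
    return T

T = ground_terms(3)
print(len(T["Nat"]), "ground Nat-terms of depth at most 3")   # 13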
Example 2 Consider the following CafeOBJ specification of non-deterministic
natural numbers:
mod! NNAT {
protecting(NAT)
op _|_ : Nat Nat -> Nat
trans M:Nat | N:Nat => M .
trans M:Nat | N:Nat => N .
}
The denotation of NNAT is initial and consists [of the isomorphism class] of one
model, 0_NNAT, the initial model. The main carrier of 0_NNAT is a category which has
non-empty lists of natural numbers as objects and deletion sequences as arrows.
_|_ gets interpreted as a functor which concatenates lists of numbers on objects,
and composes deletion sequences in parallel ("horizontally").
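A small Python sketch (ours) of this initial model: representing states as lists of naturals, each single-element deletion is one application of the two transitions, and longer deletion sequences compose such steps. The code below enumerates the states reachable from a given list.

def one_step_deletions(state):
    # drop one element of the list; each such deletion is one application of the transitions
    if len(state) == 1:
        return set()
    return {state[:i] + state[i + 1:] for i in range(len(state))}

def reachable(state):
    # all states reachable by some (possibly empty) deletion sequence
    seen, todo = {state}, [state]
    while todo:
        for nxt in one_step_deletions(todo.pop()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

print(sorted(reachable((1, 2, 3))))    # every non-empty sublist of (1, 2, 3)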
2.2 Hidden Order Sorted Rewriting Logic Institution
We devote this section to the definition of the HOSRWL institution (defined for
the first time in [8] in the many sorted version) which embeds all CafeOBJ cube
institutions. We recall here that the behavioural specification part of HOSRWL is
based on the 'coherent hidden algebra' of [13]. The deep understanding of HOS-
RWL requires further reading on its main components ([32] for RWL and [13] for
CHA) as well as their integration [8].
Signatures
Definition 3 A HOSRWL signature is a tuple (H, V, ≤, S, S^b), where
- (H, ≤) and (V, ≤) are disjoint partially ordered sets of hidden sorts and visible
  sorts, respectively,
- S is an (H ∪ V, ≤)-order-sorted signature, and
- S^b ⊆ S is a subset of behavioural operations such that each σ ∈ S^b_{w,s} has exactly one
  hidden sort in w.
Notice that we may simplify the notation (H, V, ≤, S, S^b) to just (H, V, ≤, S), or
just S, when no confusion is possible.
From a methodological perspective, the operations in S^b have object-oriented mean-
ing: σ ∈ S^b_{w,s} is thought of as an action (or "method" in more classical jargon) on the
space (type) of states if s is hidden, and as an observation (or "attribute" in
more classical jargon) if s is visible. The last condition says that the actions and
observations act on (states of) single objects.
Definition 4 A HOSRWL signature morphism F : (H, V, ≤, S, S^b) → (H′, V′, ≤′, S′, S′^b)
is an order-sorted signature morphism (H ∪ V, ≤, S) → (H′ ∪ V′, ≤′, S′) such that
(M1) F(V) ⊆ V′ and F(H) ⊆ H′,
(M2) F(S^b) ⊆ S′^b, and
(M3) every σ′ ∈ S′^b whose arity contains a hidden sort from F(H) belongs to F(S^b).
These conditions say that hidden sorted signature morphisms preserve visibility and
invisibility for both sorts and operations, and the F(S^b) ⊆ S′^b inclusion together
with (M3) expresses the encapsulation of classes (in the sense that no new actions
(methods) or observations (attributes) can be defined on an imported class) 8 . How-
ever, these conditions apply only to the case when signature morphisms are used as
8 Without it the Satisfaction Condition fails, for more details on the logical and computational
relevance of this condition see [21].
module imports (the so-called horizontal signature morphisms); when they model
specification refinement this condition might be dropped (this case is called vertical
signature morphism).
Proposition 5 HOSRWL signatures and signature morphisms (with the obvious
composition) form a category denoted as Sign
HOSRWL .
Sentences
In HOSRWL there are several kinds of sentences inherited from the various CafeOBJ
cube institutions.
Definition 6 Consider a HOSRWL signature (H, V, ≤, S, S^b). Then a (strict) equation
is a sentence of the form

  (∀X) t = t′ if C,

where X is an (H ∪ V)-sorted set of variables, t, t′ are S-terms with variables X, and
C is a Boolean(-sorted) S-term,
a behavioural equation is a sentence of the form

  (∀X) t ∼ t′ if C,

a (strict) transition is a sentence of the form

  (∀X) t ⇒ t′ if C,

and a behavioural transition is a sentence of the form

  (∀X) t ⇝ t′ if C,

where X, t, t′, and C have the same meaning as for strict equations.
All these sentences are here defined in the conditional form. If the condition is
missing (which is equivalent to saying it is always true), then we get the unconditional
versions of sentences. Notice also that our approach to conditional sentences
is slightly different from the literature in the sense that the condition is a Boolean
term rather than a finite conjunction of formul. Our approach is more faithful to
the concrete level of CafeOBJ and is also more general. This means that a finite
conjunction of formul can be translated to a Boolean term by using some special
semantic predicates (such as == for semantics equality and ==> for the semantic
transition relation, in CafeOBJ). We do not discuss here the full details of this ap-
proach, we only further mention that the full rigorous treatment of such conditions
can be achieved within the so-called constraint logic [10], which can however be
regarded as a special case of an abstract categorical form of plain equational logic
[5, 4].
Equational attributes such as associativity (A), commutativity (C), identity (I), or
idempotence (Z) are just special cases of strict equations. However, the behavioural
part of HOSRWL has another special attribute called behavioural coherence [12,
13] which is regarded as a sentence:
Definition 7 Let (H, V, ≤, S, S^b) be a signature. Then

  σ coherent

is a behavioural coherence declaration for σ, where σ is any operation in S.
Definition 8 Given a signature morphism F : (H, V, ≤, S, S^b) → (H′, V′, ≤′, S′, S′^b),
the translation of sentences is defined by replacing all operation symbols from S
with the corresponding symbols (via F) from S′ and by re-arranging the sorts of the
variables involved according to the sort mapping given by F.
Fact 9 If we denote the set of sentences of a signature (H, V, ≤, S, S^b) by
Sen_HOSRWL(H, V, ≤, S, S^b) and the sentence translation corresponding to a signature
morphism F by Sen_HOSRWL(F), then this gives a sentence functor
Sen_HOSRWL : Sign_HOSRWL → Set.
Models
Models of HOSRWL are rewrite models [32] which are (algebraic) interpretations
of the signatures into C at (the category of small categories) rather than in Set (the
category of small sets) as in the case of ordinary algebras. Thus, ordinary algebras
can be regarded as a special case of rewrite models with discrete carriers.
Definition 10 Given a HOSRWL signature (H, V, ≤, S, S^b), a HOSRWL model M
interprets:
- each sort s as a small category M_s, and each subsort relation s < s′ as a sub-category
  relation M_s ⊆ M_{s′}, and
- each operation σ ∈ S_{w,s} as a functor M_σ : M_w → M_s.
Notice that each S-term t gets interpreted as a functor M_t, obtained by evaluating it
for each assignment of the variables occurring in t with objects or arrows from the
corresponding carriers of M.
Model homomorphisms in HOSRWL follow an idea of [29] by refining the ordinary
concept of model morphism and reforming the hidden algebra [21, 22] homomorphisms
by taking adequate care of the behavioural structure of models. We need
first to define the concept of behavioural equivalence.
Definition 11 Recall that an S-context c[z] is any S-term c with a marked variable
z occurring only once in c. A context c[z] is behavioural iff all operations above z
are behavioural.
Given a model M, two elements (of the same sort s; they can be either both objects
or both arrows) a and a′ are called behaviourally equivalent, denoted a ∼_s a′ (or
just a ∼ a′), iff M_c(a) = M_c(a′) 9
for all visible behavioural contexts c.
Remark that the behavioural equivalence is an (H ∪ V)-sorted equivalence relation,
and on the visible sorts the behavioural equivalence coincides with the (strict)
equality relation.
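The definition can be animated on a tiny finite example. In the Python sketch below (ours, with an assumed hidden algebra of bounded counters), the behavioural operations are an action inc and an observation read that caps its result at 2; the states 2 and 3 cannot be distinguished by any visible behavioural context and are therefore behaviourally equivalent.

STATES = [0, 1, 2, 3]
inc  = lambda s: min(s + 1, 3)        # behavioural action (hidden -> hidden)
read = lambda s: min(s, 2)            # behavioural observation (hidden -> visible)

def behaviourally_equivalent(a, b, depth=6):
    # the visible behavioural contexts here are exactly read(inc^k(z)); check them up to `depth`
    for k in range(depth + 1):
        x, y = a, b
        for _ in range(k):
            x, y = inc(x), inc(y)
        if read(x) != read(y):
            return False
    return True

print({s: [t for t in STATES if behaviourally_equivalent(s, t)] for s in STATES})
# {0: [0], 1: [1], 2: [2, 3], 3: [2, 3]}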
Now we are ready to give the definition of model homomorphism in HOSRWL.
Definition 12 A homomorphism h : M → M′ between models of a signature (H, V, ≤, S, S^b)
is a sorted family of relations h_s ⊆ M_s × M′_s between the carriers such that
(for each sort s):
- for all a ∈ M_s there exists a′ ∈ M′_s (a and a′ can be either both arrows or both
  objects) such that a h_s a′,
- for all a ∈ M_s, if a h a′ then (a h b′ if and only if a′ ∼ b′),
- for all a, b ∈ M_s and a′ ∈ M′_s, if a h a′ and a ∼ b then b h a′, and
- for each operation σ ∈ S_{w,s}, for all a ∈ M_w and a′ ∈ M′_w, a h_w a′ implies
  M_σ(a) h_s M′_σ(a′) (with h_w taken component-wise).
Notice that when there are no hidden sorts (i.e., we are in some non-behavioural part
of HOSRWL), this concept of model homomorphism coincides with the rewriting
model homomorphism.
For a given signature (H, V, ≤, S), we denote its category of models by MOD_HOSRWL(H, V, ≤, S).
Notice that any signature morphism F : (H, V, ≤, S) → (H′, V′, ≤′, S′) determines
a model reduct functor MOD_HOSRWL(F) : MOD_HOSRWL(H′, V′, ≤′, S′) → MOD_HOSRWL(H, V, ≤, S)
in the usual way (by renaming the sorts of the carriers and the interpretations
of the operations according to the mapping of sorts and operations given by
F). Therefore we have a contravariant model functor MOD_HOSRWL : Sign_HOSRWL^op → Cat.
9 Notice that this equality means an equality between functions, since the context c may
contain variables other than z.
10 This is a relation between the sets of objects together with a relation between the sets of
arrows, such that this couple of relations commute with the domain functions, codomain
functions, and arrow composition functions.
Satisfaction
The satisfaction relation between sentences and models is the crucial concept of an
institution (see Definition 19).
Definition 13 Consider a model M of a signature (H, V, ≤, S, S^b). Then M satisfies
an equation, i.e., M ⊨ (∀X) t = t′ if C, if and only if M_t(θ) = M_{t′}(θ)
for all valuations θ : X → M for which M_C(θ) is true. (Notice that this applies both to objects and arrows,
since valuations may map variables either to objects or to arrows.)
M satisfies a behavioural equation, i.e., M ⊨ (∀X) t ∼ t′ if C, if and only if M_t(θ) ∼ M_{t′}(θ)
for all valuations θ : X → M for which M_C(θ) is true.
M satisfies a transition, i.e., M ⊨ (∀X) t ⇒ t′ if C, if and only if for each
"object valuation" q for which
M_C(q) is true there is an arrow a_q : M_t(q) → M_{t′}(q) such that the family a = (a_q)_q
is natural 11 with respect to all "arrow valuations".
M satisfies a behavioural transition, i.e., M ⊨ (∀X) t ⇝ t′ if C, if and only if
for each appropriate visible behavioural context c and for each "object valuation"
q for which M_C(q) is true there is an arrow a^c_q : M_{c[t]}(q) → M_{c[t′]}(q)
such that the family a^c is natural with respect to all "arrow valuations".
Finally, M satisfies a coherence declaration, i.e., M ⊨ (σ coherent), if and only
if M_σ preserves the behavioural equivalence on M, i.e., a ∼ a′ implies M_σ(a) ∼ M_σ(a′)
for all a, a′ ∈ M_w.
11 I.e. a is a natural transformation.
Notice that the behavioural coherence of both the behavioural operations and of
operations of a visible rank is trivially satisfied.
Example 14 Consider the following CafeOBJ behavioural specification of non-deterministic
naturals:
mod* NNAT-HSA {
protecting(NAT)
*[ NNat ]*
op [_] : Nat -> NNat
op _|_ : NNat NNat -> NNat
bop _->_ : NNat Nat -> Bool
vars S1 S2 : NNat
vars M N : Nat
eq [M] -> N = (M == N) .
eq (S1 | S2) -> N = S1 -> N or S2 -> N .
}
Notice that for all models M of NNAT-HSA, the hidden-sorted operation _|_ is behaviourally coherent.
This situation when the operations which are neither behavioural nor data type
operations (i.e. with visible rank) are automatically coherent is rather natural and
occurs very often in practice, and this corresponds to the so-called coherence conservative
methodology of [13].
The definition of the satisfaction relation between sentences and models completes
the construction of the HOSRWL institution:
Theorem 15 (Sign_HOSRWL, Sen_HOSRWL, MOD_HOSRWL, ⊨) is an institution.
For the definition of institution see Definition 19 given below. We omit here the
proof of this result which is rather long and tedious and follows the same pattern as
proofs of similar results, also reusing some of them (such as the proof that RWL is
an institution).
At the end of the presentation of the HOSRWL institution we give a brief example
of a CafeOBJ specification in HOSRWL:
Example 16 Consider a behavioural specification of sets of non-deterministic naturals:
mod* SETS {
protecting(NNAT)
*[ Sets ]*
op add : Nat Sets -> Sets
op _U_ : Sets Sets -> Sets
op _&_ : Sets Sets -> Sets
op neg : Sets -> Sets
bop _in_ : Nat Sets -> Bool
vars E N : Nat
vars S S1 S2 : Sets
eq E in add(N, S) = (E == N) or (E in S) .
eq E in (S1 U S2) = (E in S1) or (E in S2) .
eq E in (S1 & S2) = (E in S1) and (E in S2) .
eq E in neg(S1) = not (E in S1) .
}
where NNAT is the RWL specification of non-deterministic naturals of Example
2. Notice that each model of SETS satisfies the usual set theory rules (such as
commutativity and associativity of union and intersection, De Morgan laws, etc.)
only behaviourally, not necessarily in the strict sense. For example, the following
De Morgan behavioural rule
beq neg(S1 U S2) = neg(S1) & neg(S2) .
is a consequence of the specification SETS. Also, the following behavioural tran-
sition
btrans add(M | N, S) => add(M, S) .
is a consequence of SETS too.
Specifications in full HOSRWL naturally occur in the case of a behavioural specification
using concurrent (RWL) data types. However the practical significance of
full HOSRWL is still little understood. The real importance of the HOSRWL institution
is its initiality in the CafeOBJ cube. We will see below that the existence
of all possible combinations between the main logics/institutions of CafeOBJ is
crucial for the good properties of the CafeOBJ institution.
2.3 Operational vs. Logical Semantics
The operational semantics underlies the execution of specifications or programs.
As with OBJ, the CafeOBJ operational semantics is based on rewriting, which in
the case of proofs is used without directly involving the user defined transitions
(rules) but rather involving them via the built-in semantic transition predicate ==>.
For executions of concurrent systems specified in rewriting logic, CafeOBJ uses
both the user-defined transitions and equations.
Since rewriting is a very well know topic in algebraic specification, we do not insist
here on the standard aspects of rewriting. However, the operational semantics of
behavioural specification requires a more sophisticated notion of rewriting which
takes special care of the use of behavioural sentences during the rewriting process,
which we call behavioural rewriting [12, 13]:
Definition 17 Given a HOSRWL signature S and a S-algebra A, a behaviourally
coherent context for A is any S-context c[z] such that all operations above 12 the
marked variable z are either behavioural or behaviourally coherent for A.
Notice that any behavioural context is also behaviourally coherent.
The following Proposition from [13] ensures the soundness of behavioural rewriting
Proposition 18 Consider a HOSRWL signature S, a set E of S-sentences regarded
as a TRS (i.e. term rewriting system), and an S-algebra A satisfying the sentences
in E. If t_0 is a ground term and, for any rewrite step of a rewriting t_0 →* t which uses a behavioural
equation from E, the rewrite context has a visible behaviourally coherent
sub-context for A, then A ⊨ t_0 = t. If the rewrite context is behaviourally
coherent for A, then A ⊨ t_0 ∼ t.
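Operationally, the side condition amounts to a simple check on the path from the root of the term to the rewrite position. The Python sketch below (ours; the operation names are made up) performs that check on terms represented as nested tuples.

BEHAVIOURAL = {"get"}                  # assumed behavioural operations
COHERENT    = {"union"}                # assumed operations declared behaviourally coherent

def ops_above(term, position):
    # operation symbols strictly above the subterm at `position` (a list of argument indices)
    ops = []
    for i in position:
        ops.append(term[0])
        term = term[i + 1]
    return ops

def context_is_behaviourally_coherent(term, position):
    return all(op in BEHAVIOURAL or op in COHERENT for op in ops_above(term, position))

t = ("get", ("union", ("add", "x", "s"), "s2"))
print(context_is_behaviourally_coherent(t, [0, 0]))      # True:  path get, union
print(context_is_behaviourally_coherent(t, [0, 0, 0]))   # False: path get, union, add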
The completeness of the operational semantics with respect to the logical semantics
is a two-layer completeness going via the important intermediate level of the proof
calculi.
Denotational
Proof
Calculus
Operational
The completeness of the proof calculus is one of the most important class of results
in algebraic specification, for equational logic we refer to [25], and for rewriting
logic to [32]. In the case of rewriting logic the relationship between the proof calculus
and rewriting is very intimate, but for equational logic the completeness of
rewriting can be found, among other many places, in [18, 7].
Notice that hidden logics of the CafeOBJ cube do not admit a complete (finitary)
proof calculus. However, advanced proof techniques support the verification process
in the case of behavioural specifications, most notably the hidden coinduction
method (see [22] for the original definition, [12, 13] for its realization in CafeOBJ,
and [6] for the details for the case of proving behavioural transitions).
12 Meaning that z is in the subterm determined by the operation.
3 The CafeOBJ Institution
In this section we define the CafeOBJ institution, which is a Grothendieck construction
on the CafeOBJ cube. The Grothendieck construction for institutions
was first introduced by [11] and generalises the famous Grothendieck construction
for categories [28]. The essence of this Grothendieck construction is that it constructs
a 'disjoint sum' of all institutions of the CafeOBJ cube, also introducing
theory morphisms across the institution embeddings of the CafeOBJ cube. Such
extra theory morphisms were first studied in [9]. However, one advantage of the
Grothendieck institutions is that they treat the extra theory morphisms as ordinary
theory morphisms, thus leading to a conceptual simplification with respect to [9].
The reader might wonder why one cannot live with HOSRWL only (which embeds
all the CafeOBJ cube institutions) and we still need a Grothendieck construction
on the CafeOBJ cube. The reason for this is that the combination of
logics/institutions realized by HOSRWL collapses crucial semantic information,
therefore a more refined construction which preserves the identity of each of the
CafeOBJ cube institutions, but yet allowing a concept of theory morphism across
the institution embeddings, is necessary. For example, in the case of specifications
with loose semantics without a RWL component, the carriers of the models of
these specifications should be sets rather than categories, which is not possible
in HOSRWL. Therefore, such specifications should be given semantics within the
appropriate institution of the CafeOBJ cube rather than in HOSRWL. Example 36
illustrates this argument.
3.1 Institutions
We now recall from [19] the definitions of the main institution concepts:
Definition 19 An institution ℑ = (Sign, Sen, MOD, ⊨) consists of
(1) a category Sign, whose objects are called signatures,
(2) a functor Sen : Sign → Set, giving for each signature a set whose elements are
called sentences over that signature,
(3) a functor MOD : Sign^op → Cat giving for each signature S a category whose
objects are called S-models, and whose arrows are called S-(model) mor-
phisms, and
(4) a relation ⊨_S ⊆ |MOD(S)| × Sen(S) for each S ∈ |Sign|, called S-satisfaction,
such that for each morphism φ : S → S′ in Sign, the satisfaction condition

  m′ ⊨_{S′} Sen(φ)(e)  iff  MOD(φ)(m′) ⊨_S e

holds for each m′ ∈ |MOD(S′)| and e ∈ Sen(S). We may denote the reduct functor
MOD(φ) by _↾φ and the sentence translation Sen(φ) by φ(_).
Definition 20 Let ℑ = (Sign, Sen, MOD, ⊨) be an institution. For any signature S,
the closure of a set E of S-sentences is E• = {e ∈ Sen(S) | M ⊨_S e whenever M ⊨_S E}. (S, E) is a theory if
and only if E is closed, i.e., E = E•.
A theory morphism φ : (S, E) → (S′, E′) is a signature morphism φ : S → S′ such
that φ(E) ⊆ E′. Let Th(ℑ) denote the category of all theories in ℑ.
For any institution ℑ, the model functor MOD extends from the category of its
signatures Sign to the category of its theories Th(ℑ), by mapping a theory (S, E) to
the full subcategory MOD(S, E) of MOD(S) formed by the S-models which satisfy E.
Definition 21 A theory morphism φ : (S, E) → (S′, E′) is liberal if and only if the
reduct functor MOD(φ) : MOD(S′, E′) → MOD(S, E) has a left adjoint.
The institution ℑ is liberal if and only if each theory morphism is liberal.
Definition 22 An institution ℑ = (Sign, Sen, MOD, ⊨) is exact if and only if the
model functor MOD : Sign^op → Cat preserves finite limits. ℑ is semi-exact if and
only if MOD preserves pullbacks.
Definition 23 Let ℑ and ℑ′ be institutions. Then an institution homomorphism
ℑ′ → ℑ consists of
(1) a functor F : Sign′ → Sign,
(2) a natural transformation α : F;Sen ⇒ Sen′, and
(3) a natural transformation β : MOD′ ⇒ F^op;MOD,
such that the following satisfaction condition holds

  m′ ⊨′_{S′} α_{S′}(e)  iff  β_{S′}(m′) ⊨_{S′F} e

for any S′-model m′ from ℑ′ and any S′F-sentence e from ℑ.
Fact 24 Institutions and institution homomorphisms form a category denoted as
Ins.
The following properties of institution homomorphisms were defined in [11] and
play an important rôle for Grothendieck institutions:
Definition 25 An institution homomorphism (F, α, β) : ℑ′ → ℑ is
- an embedding iff F admits a left adjoint F̄ (with unit ζ); an institution embedding
  is denoted as ℑ′ ↪ ℑ;
- liberal iff β_{S′} has a left adjoint β̄_{S′} for each S′ ∈ |Sign′|.
An institution embedding is exact if and only if, for each signature morphism in ℑ′,
the corresponding square of model categories and reduct functors (formed from MOD,
MOD′, the left adjoint F̄, and the unit ζ; diagram not reproduced here) is a pullback.
13 M ⊨ E means that M satisfies all sentences in E.
3.2 Indexed and Grothendieck Institutions
The following definition from [11] generalises the concept of indexed category [36]
to institutions.
Definition 26 An indexed institution is a functor ℑ : I^op → Ins.
The CafeOBJ cube is an indexed institution where the index category I is the
8-element lattice corresponding to the cube (i.e., the elements of the lattice correspond
to the nodes of the cube and the partial order is given by the arrows of the
cube).
Definition 27 The Grothendieck institution ℑ♯ of an indexed institution ℑ : I^op →
Ins has
(1) the Grothendieck category Sign♯ as its category of signatures, where Sign : I^op →
Cat is the indexed category of signatures of the indexed institution ℑ,
(2) MOD♯ : (Sign♯)^op → Cat as its model functor, where MOD♯⟨i, S⟩ = MOD^i(S)
for each index i ∈ |I| and signature S ∈ |Sign^i|, and where the reduct along an arrow ⟨u, φ⟩
composes the model translation of the institution homomorphism ℑ^u with the reduct MOD^i(φ),
(3) Sen♯ : Sign♯ → Set as its sentence functor, where Sen♯⟨i, S⟩ = Sen^i(S)
for each index i ∈ |I| and signature S ∈ |Sign^i|, and where the translation along ⟨u, φ⟩
composes the sentence translation of ℑ^u with the translation along φ, and
(4) the satisfaction relation of ℑ^i as its ⟨i, S⟩-satisfaction: M ⊨♯_{⟨i,S⟩} e iff M ⊨^i_S e, for each index
i ∈ |I| and signature S ∈ |Sign^i|.
For the category minded readers we mention that [11] gives a higher level characterisation
of the Grothendieck institution as a lax colimit in the 2-category Ins (with
institutions as objects, institution homomorphisms as 1-cells, and institution modifications
as 2-cells; see [11] for details) of the corresponding indexed institution.
This means that Grothendieck institutions are internal Grothendieck objects 14 in
Ins in the same way as Grothendieck categories are Grothendieck objects in Cat.
For the fibred category minded readers, in [11] we also introduce the alternative formulation
of fibred institution and show that there is a natural equivalence between
split fibred institutions and Grothendieck institutions.
We would also like to mention that the concept of extra theory morphism [9] across
an institution homomorphism 0 ! (with all its subsequent concepts) is recuper-
ated as an ordinary theory morphism in the Grothendieck institution of the indexed
institution given by the homomorphism 0 ! (i.e., which has ! as its index
category).
Now we are ready to define the institution of CafeOBJ:
Definition 28 The CafeOBJ institution is the Grothendieck institution of the CafeOBJ
cube.
3.3 Properties of the CafeOBJ Institution
In this section, we briefly study the most important institutional properties of the
CafeOBJ institution: existence of theory colimits, liberality (i.e. free construc-
tions), and exactness (i.e. model amalgamation).
The institution homomorphisms of the CafeOBJ cube are all embeddings; this
makes the CafeOBJ cube an embedding-indexed institution (cf. [11]). As we will
see below, this property of the CafeOBJ cube plays an important rôle for the properties
of the CafeOBJ institution.
Theory Colimits.
The existence of theory colimits is crucial for any module system in the Clear-OBJ
tradition. Let us recall the following result from [11]:
14 From [11], a Grothendieck object in a 2-category is a lax colimit of a 1-functor to that
2-category.
Theorem 29 Let D : I^op -> Ins be an embedding-indexed institution such that I is J-cocomplete for a small category J. Then the category of theories Th(D♯) of the Grothendieck institution D♯ has J-colimits if and only if the category of signatures Sign^i is J-cocomplete for each index i ∈ |I|.
Corollary 30 The category of theories of the CafeOBJ institution is small cocomplete.
Notice that the fact that the lattice of institutions of the CafeOBJ cube is complete
(as a lattice) means exactly that the index category of the CafeOBJ cube is
(small) cocomplete, which is a precondition for the existence of theory colimits in
the CafeOBJ institution. In the absence of the combinations of logics/institution
of the CafeOBJ cube (such as HOSRWL), the possibility of theory colimits in the
CafeOBJ institution would have been lost.
Liberality.
Liberality is a desirable property in relation to initial denotations for structured
specifications. In the case of loose denotations liberality is not necessary. Since
the behavioural specification paradigm involves only loose denotations, in the case
of the CafeOBJ institution, we are therefore interested in liberality only for the
non-behavioural theories. Recall the following result from [11]:
Theorem 31 The Grothendieck institution D♯ of an indexed institution D : I^op -> Ins is liberal if and only if D^i is liberal for each index i ∈ |I| and each institution homomorphism D^u is liberal for each index morphism u ∈ I.
Corollary 32 In the CafeOBJ institution, each theory morphism between non-behavioural theories is liberal.
Notice that this corollary is obtained from the theorem above by restricting the
index category to the non-behavioural square of the CafeOBJ cube, and from the
corresponding liberality results for equational and rewriting logics.
Exactness.
Firstly, let us extend the usual exactness results for equational and rewriting logics
to the CafeOBJ cube:
Proposition 33 All institutions of the CafeOBJ cube are semi-exact.
As shown in [9] and [11], in practice exactness is a property hardly achieved at the
global level by the Grothendieck institutions. In [11] we give a necessary and sufficient
set of conditions for (semi-)exactness of Grothendieck institutions. One of
them is the exactness of the institution embeddings, which fails for the embeddings
from the non-RWL institutions into the RWL institutions of the CafeOBJ cube. In
the absence of a desired global exactness property for the CafeOBJ institution, we
need a set of sufficient conditions for exactness for practically significant particular
cases. In [9] we formulate a set of such sufficient conditions, but this problem is
still open.
4 Foundations of Structured Specifications
In this section we survey the mathematical foundations of the CafeOBJ module
composition system. CafeOBJ module composition system follows the principles
of the OBJ module system which are inherited from earlier work on Clear [1].
Consequently, CafeOBJ module system is institution-independent (i.e., can be developed
at the abstract level of institutions) in the style of [15]. In the actual case
of CafeOBJ, the institution-independent semantics is instantiated to the CafeOBJ
institution. The following principle governs the semantics of programming in-the-
large in CafeOBJ:
(L) For each structured specification we consider the theory corresponding
to its flattening to a basic specification. The structuring constructs are
modelled as theory morphisms between these corresponding theories.
The denotation [[SP]] of a structured specification is determined from the
denotations of the components recursively via the structuring constructs
involved.
The general structuring mechanism is constituted by module expressions, which
are iterations of several basic structuring operations, such as (multiple) imports,
parameters, instantiation of parameters by views, translations, etc.
4.1 Module Imports
Module imports constitute the most primitive structuring construct in any module
composition system. The concept of module import in the institution-independent
semantics of CafeOBJ is based on the mathematical notion of inclusion system.
Module imports are modeled as inclusion theory morphisms between the
theories corresponding to flattening the imported and the importing modules
Inclusion systems where first defined by [15] for the institution-independent study
of structuring specifications. Weak inclusion systems were introduced in [3], and
they constitute a simplification of the original definition of inclusion systems of
[15]. We recall the definition of inclusion systems:
Definition 34 ⟨I, E⟩ is a weak inclusion system for a category C if I and E are two sub-categories with |I| = |E| = |C| such that
(1) I is a partial order, and
(2) every arrow f in C can be factored uniquely as f = e;i with e ∈ E and i ∈ I.
The arrows of I are called inclusions, and the arrows of E are called surjections. 15 The domain (source) of the inclusion i in the factorisation of f is called the image of f and denoted as Im(f). An injection is a composition between an inclusion and an isomorphism.
A weak inclusion system ⟨I, E⟩ is an inclusion system iff I has finite least upper bounds (denoted +) and all surjections are epics (see [15]).
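As a concrete illustration (a standard example, not taken from the text above): the category Set carries an inclusion system in which the inclusions are the subset inclusions and the surjections are the surjective functions, so every function factors uniquely through its image:

% Factorisation of an arbitrary function f : A -> B in Set:
% a surjection onto the image followed by a subset inclusion.
\[
  f \;=\; \bigl(A \xrightarrow{\;e\;} \mathrm{Im}(f) \hookrightarrow B\bigr),
  \qquad e(a) = f(a).
\]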
The inclusion system for the category of theories of the CafeOBJ institution is obtained
by lifting the inclusion system for its category of signatures (see [15, 3]). The
inclusion system for the category of signatures is obtained from the canonical
inclusion systems of the categories of signatures of the CafeOBJ cube institutions
by using the following result from [11] (which appeared previously in a slightly
different form in [9]):
Theorem 35 Let B : I^op -> Cat be an indexed category such that
- I has a weak inclusion system ⟨I_I, E_I⟩,
- B^i has a weak inclusion system ⟨I_i, E_i⟩ for each index i ∈ |I|,
- B^u preserves inclusions for each inclusion index morphism u ∈ I_I, and
- B^u preserves inclusions and surjections and lifts inclusions uniquely for each surjection index morphism u ∈ E_I.
Then, the Grothendieck category B♯ has an inclusion system ⟨I_{B♯}, E_{B♯}⟩ in which ⟨u, j⟩ is an
- inclusion iff both u and j are inclusions, and
- surjection iff both u and j are surjections.
In the case of the CafeOBJ institution, this result is applied for the indexed category
of signatures of the CafeOBJ cube.
Example 36 Consider the following module import:
mod* TRIV { [ Elt ] }
mod* NTRIV {
  protecting(TRIV)
  op _|_ : Elt Elt -> Elt { assoc }
  trans M:Elt | N:Elt => M .
  trans M:Elt | N:Elt => N .
}
15 Surjections of some weak inclusion systems need not necessarily be surjective in the ordinary sense.
Module TRIV gets an MSA loose theory, which has all sets as its denotation. Module NTRIV gets an RWL loose theory, which has as denotations categories with an interpretation of _|_ as an associative binary functor, and which satisfies the couple of choice transitions of NTRIV. The module import TRIV -> NTRIV corresponds to an injective extra theory morphism T_TRIV -> T_NTRIV across the forgetful institution homomorphism RWL -> MSA.
More formally, the inclusion signature morphism underlying T_TRIV -> T_NTRIV can be represented as ⟨u, j⟩ where u is the institution morphism RWL -> MSA and j is the signature inclusion S_TRIV -> u(S_NTRIV) (where S_TRIV is the MSA signature of TRIV, S_NTRIV is the RWL signature of NTRIV, and u(S_NTRIV) is the reduct of S_NTRIV to an MSA signature). Notice that u is an inclusion since the CafeOBJ cube admits a trivial inclusion system in which all arrows are inclusions, that the reduct from RWL signatures to MSA signatures is an identity, and that S_TRIV -> S_NTRIV is an inclusion of MSA signatures.
An interesting aspect of this example is given by its model theory. The denotation of this module import is the model reduct functor MOD(T_NTRIV) -> MOD(T_TRIV) in the CafeOBJ institution. From Definition 27, this is the composite of b^u and MOD_MSA(j), which means a two-level reduction. The first level, b^u, means getting rid of the arrows of the carrier (i.e. making the carrier discrete) of the model and regarding the interpretation of _|_ as a function rather than a functor. The second level, MOD_MSA(j), is a reduction internal to MSA which forgets the interpretation of _|_. It is very important to notice that the correct denotation for this module import can be achieved only in the framework of the CafeOBJ institution, the fact that this is a Grothendieck institution being crucial. None of the institutions of the CafeOBJ cube (such as RWL for example) would have been appropriate to give the denotation of this example.
We denote the partial order of module imports by ≤. By following the OBJ tradi-
tion, we can distinguish between three basic kinds of imports, protecting, extend-
ing, and using. At the level of the language, these should be treated just as semantic
declarations which determine the denotation of the importing module from the denotation
of the imported module.
Definition 37 Given a theory morphism j : T -> T' and a model M of T, an expansion of M along j is a model M' of T' satisfying the following properties:
- M'↾j = M iff the expansion is protecting,
- there is an injective 16 model homomorphism M ↪ M'↾j iff the expansion is extending,
- there is an arbitrary model homomorphism M -> M'↾j iff the expansion is using, and
- M' is free 17 over M with respect to j iff the expansion is free.
Definition 38 Fix an import SP ≤ SP' and let T and T' be the theories corresponding to SP and SP', respectively. Then
    [[SP']] = { M' ∈ MOD(T') | M' is an expansion of the same kind as the importation mode involved of some model M ∈ [[SP]] (and in addition free if SP' is initial) }.
Multiple imports are handled by a lattice structure on imports. The (finite) least
upper bounds (called sums in [15]) of module imports corresponds to the weak inclusion
system of theory morphisms being a proper inclusion system. In [16] we
lift sums from inclusion systems for ordinary theory morphisms to extra theory
morphisms; this result can be easily translated to the conceptual framework of Grothendieck
institutions. The (finite) greatest lower bounds (called intersections) are
defined as the pullback of the sums.
The details of this construction for the inclusion system of extra theory morphisms
are given in [16]; also this construction can be easily translated to the conceptual
framework of Grothendieck institutions.
In practice, one of the important properties of the sum-intersection square is to
be a pushout besides being a pullback square. This result for the inclusion system
of extra theory morphisms is given in [16]; again this can be easily translated to
Grothendieck institutions.
Under a suitable concept of 'injectivity'.
17 Which means that M 0 is the free object over M with respect to the model reduct functor
This relies the construction of finite limits in Grothendieck (fibred) categories.
4.2 Parameterisation
Parameterised specification and programming is an important feature of all module
systems of modern specification or programming languages. In CafeOBJ, the mathematical concept of parameterised modules is based on injections (in the sense of Definition 34) in the category of theories of the CafeOBJ institution:
Parameterised specifications SP(X :: P) are modelled as injective theory morphisms from the theory corresponding to the parameter P to the theory corresponding to the body SP. Views are modelled as theory morphisms from the theory corresponding to the (formal) parameter to the theory corresponding to the actual parameter.
The denotation [[SP]] of the body is determined from the denotation of the parameter according to the parameterisation mode involved, as in the case of module imports (Definition 38).
We distinguish two opposite approaches on parameters: a shared and a non-shared
one. In the 'non-shared' approach, the multiple parameters are mutually disjoint
(i.e.,
for X and X 0 two different parameters) and they are also
disjoint from any module imports T 0 T (i.e.,
0). In the 'shared'
approach this principle is relaxed to being disjoint outside common imports, i.e.,
two different parameters and
. The 'non-shared' approach has the potentiality
of a much more powerful module system, while the 'shared' approach seems
to be more convenient to implement (see [12] for details). The CafeOBJ definition
gives the possibility of the whole range of situations between these two extremes
by giving the user the possibility to control the sharing.
Example 39 This is an example adapted from [12]. Consider the (double param-
eterised) specification of a 'power' operation on monoids, where powers are elements
of another (abstract) monoid rather than natural numbers.
mod* MON { ... }
mod* MON-POW (POWER :: MON, M :: MON) { ... }
The diagram defining MON-POW is the span of injections
    POWER --> MON-POW <-- M
where MON-POW consists of two copies of MON (labelled by M and POWER) respectively, plus the power operation together with the 3 axioms defining its action.
This means TRIV is not shared, since the power monoid and the base monoid are allowed to have different carriers. The denotation [[MON-POW]] consists of all protecting expansions (with interpretations of the power operation) to MON-POW of non-shared amalgamations of monoids corresponding to the two parameters.
In the 'shared' approach, the parameterisation diagram is
    (diagram omitted: the two parameter copies of MON, labelled M and POWER, mapped into MON-POW while sharing their common import TRIV)
In this case, the denotation consists of all pairs of (possibly different) monoid structures on the same set, plus an interpretation of the power operation satisfying the 'power' equations.
In CafeOBJ such sharing can be achieved by the user by the command share
which has the effect of enforcing that the modules declared as shared are included
rather than 'injected' in the body specification. In this case we have just to specify
share(TRIV)
The following defines parameter instantiation by pushout technique for the case of
single parameters. This definition can be naturally extended to the case of multiple
parameters (for details about instantiation of multiple parameters in CafeOBJ see
[12]).
Definition 40 Let SP(X :: P) be a parameterised module and let T_P -> T_SP be its representation as a theory morphism. Let v : T_P -> T_A be a view. Then the instantiation T_SP(v) is given by the following pushout of theory morphisms in the CafeOBJ institution:
    (pushout square omitted: T_P -> T_SP and v : T_P -> T_A, with T_SP(v) the pushout vertex)
in the 'non-shared' approach, and by the corresponding colimit in the 'shared' approach.
The semantics of parameter instantiation relies on preservation properties of conservative
extensions by pushouts of theory morphisms. Recall the concept of conservative
theory morphism from [15]:
Definition 41 A theory morphism j : T -> T' is conservative iff any model M of T has a protecting expansion along j.
Preservation of conservative extensions in Grothendieck institutions is a significantly
harder problem than in ordinary institutions. Such technical results for Grothendieck
institutions have been obtained in [16] but within the conceptual frame-work
of extra theory morphisms.
5 Conclusions and Future Work
We surveyed the logical foundations of CafeOBJ which constitute the origin of the
concrete definition of the language [12]. Some of its main features are:
simplicity and effectiveness via appropriate abstractness,
cohesiveness,
flexibility,
provides support for multi-paradigm integration,
provides support for the development of specification methodologies, and
uses state-of-art methods in algebraic specification research.
We defined the CafeOBJ institution, overviewed its main properties, and presented
the main mathematical concepts and result underlying basic and structured specification
in CafeOBJ.
Besides theoretical developments, future work on CafeOBJ will mainly concentrate
on specification and verification methodologies, especially the object-oriented
ones emerging from the behavioural specification paradigm. This includes refining
the existing object composition methodology based on projection operations
[30, 14, 12] but also the development of new methodologies and careful identification
of the application domains most suitable to certain specification and verification
methodologies.
The development of CafeOBJ has been an interplay process among language de-
sign, language and system implementation, and methodology development. Although
the language design is based on solid and firm mathematical foundations, it
has been greatly helped by the existence of a running system, which gave the possibility
to run various relevant examples, thus giving important feedback at the level
of concrete language constructs and execution commands. The parallel development
of methodologies gave special insight on the relationship between the various
paradigms co-existing in CafeOBJ with consequences at the level of design of the
language constructs.
We think that the interplay among mathematical semantic design of CafeOBJ, the
system implementation, and the methodology development has been the most important
feature of CafeOBJ design process. We believe this promises the sound
and reasonable development of a practical formal specification method around
CafeOBJ.
--R
The semantics of Clear
Principles of Maude.
Virgil Emil C
Principles of OBJ2.
Theorem Proving and Algebra.
Institutions: Abstract model theory for specification and programming.
A hidden agenda.
An initial algebra approach to the specification
Observational logic.
Categories for the Working Mathematician.
Some fundamental algebraic tools for the semantics of computation
--TR
Initiality, induction, and computability
Unifying functional, object-oriented and relational programming with logical semantics
Some fundamental algebraic tools for the semantics of computation, part 3
Conditional rewriting logic as a unified model of concurrency
Order-sorted algebra I
Institutions: abstract model theory for specification and programming
Logical support for modularisation
Principles of OBJ2
Membership algebra as a logical framework for equational specification
Towards an Algebraic Semantics for the Object Paradigm
Observational Logic
The Semantics of CLEAR, A Specification Language
Component-Based Algebraic Specification and Verification in CafeOBJ
--CTR
Rzvan Diaconescu, Herbrand theorems in arbitrary institutions, Information Processing Letters, v.90 n.1, p.29-37, 15 April 2004
Rzvan Diaconescu, Behavioural specification for hierarchical object composition, Theoretical Computer Science, v.343 n.3, p.305-331, 17 October 2005
Miguel Palomino, A comparison between two logical formalisms for rewriting, Theory and Practice of Logic Programming, v.7 n.1-2, p.183-213, January 2007
Mauricio Ayala-Rincn , Ricardo P. Jacobi , Luis G. A. Carvalho , Carlos H. Llanos , Reiner W. Hartenstein, Modeling and prototyping dynamically reconfigurable systems for efficient computation of dynamic programming methods by rewriting-logic, Proceedings of the 17th symposium on Integrated circuits and system design, September 07-11, 2004, Pernambuco, Brazil
Rzvan Diaconescu, Institution-independent Ultraproducts, Fundamenta Informaticae, v.55 n.3-4, p.321-348, August
Razvan Diaconescu, Institution-independent ultraproducts, Fundamenta Informaticae, v.55 n.3-4, p.321-348, June
Rzvan Diaconescu, Interpolation in Grothendieck institutions, Theoretical Computer Science, v.311 n.1-3, p.439-461, 23 January 2004
M. Ayala-Rincn , C. H. Llanos , R. P. Jacobi , R. W. Hartenstein, Prototyping time- and space-efficient computations of algebraic operations over dynamically reconfigurable systems modeled by rewriting-logic, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.11 n.2, p.251-281, April 2006
Francisco Durn , Jos Meseguer, Maude's module algebra, Science of Computer Programming, v.66 n.2, p.125-153, April, 2007
Narciso Mart-Oliet , Jos Meseguer, Rewriting logic: roadmap and bibliography, Theoretical Computer Science, v.285 n.2, p.121-154, 28 August 2002 | behavioural specification;institutions;CafeOBJ;algebraic specification |
633574 | Lower bounds for the rate of convergence in nonparametric pattern recognition. | We show that there exist individual lower bounds corresponding to the upper bounds for the rate of convergence of nonparametric pattern recognition which are arbitrarily close to Yang's minimax lower bounds, for certain "cubic" classes of regression functions used by Stone and others. The rates are equal to the ones of the corresponding regression function estimation problem. Thus for these classes classification is not easier than regression function estimation. | Introduction
Let (X, Y), (X_1, Y_1), (X_2, Y_2), . . . be independent identically distributed R^d × {0, 1}-valued random variables. In pattern recognition (or classification) one wishes to decide whether the value of Y (the label) is 0 or 1 given the (d-dimensional) value of X (the observation), that is, one wants to find a decision function g defined on the range of X taking values 0 or 1 so that g(X) equals Y with high probability. Assume that the main aim of the analysis is to minimize the probability of error:
    L(g) = P{g(X) ≠ Y}.   (1)
Let η(x) = P{Y = 1 | X = x} be the a posteriori probability (or regression) function. Introduce the Bayes decision
    g*(x) = I_{η(x) > 1/2},
and let
    L* = L(g*) = P{g*(X) ≠ Y}
be the Bayes error. Denote the distribution of X by μ. Introduce ||f||_q = (∫ |f|^q dμ)^{1/q}.
It is well-known (see Devroye, Györfi and Lugosi [8]) that for each measurable function g the relation
    L(g) − L* = ∫ |2η − 1| I_{g ≠ g*} dμ   (2)
holds, where I_A denotes the indicator function of the event A. Therefore the function g* achieves the minimum in (1) and the minimum is L*.
In the classification problem we consider here, the distribution of (X, Y) (and therefore also η and g*) is unknown. Given only the independent sample D_n = ((X_1, Y_1), . . . , (X_n, Y_n)) of the distribution of (X, Y), one wants to construct a decision rule
    g_n : R^d × (R^d × {0, 1})^n -> {0, 1}
such that
    L_n = L(g_n) = P{g_n(X, D_n) ≠ Y | D_n}
is close to L*. In this paper we study asymptotic properties of E L_n − L*.
If we have an estimate η_n of the regression function η and we derive a plug-in rule g_n from η_n quite naturally by
    g_n(x) = I_{η_n(x) > 1/2},
then from (2) we get easily
    L_n − L* ≤ 2 ∫ |η_n − η| dμ ≤ 2 ||η_n − η||_2
(see [8]). This shows that if ||η_n − η||_2 -> 0 in some sense, then L_n − L* -> 0 in the same sense, and the latter has at least the same rate, that is, in a sense, classification is not more complex than regression function estimation.
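As an illustration of the plug-in construction, the following minimal sketch builds a classifier from a regression estimate of η. The k-nearest-neighbour estimate used here is only one possible choice and is not prescribed by the text; all names are illustrative.

import numpy as np

def knn_regression_estimate(x, X, Y, k=5):
    # A simple k-NN estimate of eta(x) = P{Y = 1 | X = x} (one possible choice).
    dists = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Y[nearest].mean()

def plug_in_rule(x, X, Y, k=5):
    # Plug-in classifier g_n(x) = I{eta_n(x) > 1/2}.
    eta_n = knn_regression_estimate(x, X, Y, k)
    return 1 if eta_n > 0.5 else 0

# Example usage on synthetic data with eta(x) = x_1 on [0, 1]^2.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
Y = (rng.uniform(size=200) < X[:, 0]).astype(int)
print(plug_in_rule(np.array([0.9, 0.5]), X, Y))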
It is well-known that there exist regression function estimates, and so classification rules, which are universally consistent, that is, which satisfy
    E L_n -> L*   (n -> ∞)
for all distributions of (X, Y). This was first shown in Stone [14] for nearest neighbor estimates (see also [8] for a list of references).
Classification is actually easier than regression function estimation in the sense that if E ||η_n − η||²_2 -> 0, then for the plug-in rule
    (E L_n − L*) / √(E ||η_n − η||²_2) -> 0   (3)
(see [8], Chapter 6), that is, the relative expected error of g_n decreases faster than the expected error of η_n. Moreover, if ||η_n − η||_2 -> 0 almost surely, then for the plug-in rule
    (L_n − L*) / ||η_n − η||_2 -> 0 almost surely
(see Antos [1]), that is, the relation also holds for strong consistency. However, the value of the ratio above cannot be universally bounded; the convergence can be arbitrarily slow. It depends on the behavior of η near 1/2 and the rate of convergence of {η_n}.
Unfortunately, there do not exist rules for which E L_n − L* tends to zero with a guaranteed rate of convergence for all distributions of (X, Y). Theorem 7.2 and Problem 7.2 in Devroye et al. [8] imply the following slow-rate-of-convergence result: Let {a_n} be a positive sequence converging to zero with 1/16 ≥ a_1 ≥ a_2 ≥ · · · . For every sequence {g_n} of decision rules, there exists a distribution of (X, Y) such that X is uniformly distributed on [0, 1], η(x) ∈ {0, 1} for all x, and E L_n − L* ≥ a_n for all n.
Therefore, in order to obtain nontrivial rate-of-convergence results, one has to restrict
the class of distributions. Then it is natural to ask what the fastest achievable rate is for a
given class of distributions. This is usually done by considering minimax rate-of-convergence
results, where one derives lower bounds according to the following definition.
Definition 1 A positive sequence {a_n} is called lower rate of convergence for a class D of distributions of (X, Y) if
    lim sup_{n->∞} inf_{g_n} sup_{(X,Y)∈D} (E L_n − L*) / a_n > 0.
Remark. In many cases the limit superior in this definition could be replaced by limit inferior or infimum, because the lower bound for the minimax loss holds for all (sufficiently large) n.
Since and determine the distribution of (X; Y ), the class D of distributions is often
given as a product of a class H of allowed distributions of X and a class F of allowed
regression functions. For example, H may consist of all absolute continuous distributions,
or all distributions (distribution-free approach), or one particular distribution (distribution-
sensitive approach).
In this paper we give lower bounds for some classes D in the last (and strongest) format, when H contains one (uniform) distribution and F is one of the smoothness classes of functions (with a parameter α) defined later. For minimax lower-rate results on other types
of distribution classes (e.g., Vapnik-Chervonenkis classes), see Devroye et al. [8] and the
references therein. For related results on the general minimax theory of statistical estimates
see Ibragimov and Khasmiskii [9], [10], [11], Korostelev and Tsybakov [12].
Yang [16] points out that while (3) holds for every xed distribution for which f n g is
consistent, the optimal rate of convergence for many usual classes is the same in classication
and regression function estimation. He shows many examples and some counterexamples to
this phenomenon with rates of convergence in terms of metric entropy. Classication seems
to have the same complexity as regression function estimation for classes which are rich near
1=2. (See also Mammen and Tsybakov [13].)
For example, it was shown in Yang [16] that for the distribution classes D above, the optimal rate of convergence is {n^{−α/(2α+d)}}, the same as for regression function estimation (see also Stone [15]).
In some sense, such lower bounds are not satisfactory. They do not tell us anything about
the way the probability of error decreases as the sample size is increased for a given classi-
cation problem. These bounds, for each n, give information about the maximal probability
of error within the class, but not about the behavior of the probability of error for a single
xed distribution as the sample size n increases. In other words, the \bad" distribution,
causing the largest probability of error for a decision rule, may be dierent for each n. For
example, the previous lower bounds for the classes D does not exclude the possibility that
there exists a sequence fg n g such that for every distribution in D , the expected probability
of error EL n L decreases at an exponential rate in n.
In this paper, we are interested also in \individual" minimax lower bounds that describe
the behavior of the probability of error for a xed distribution (X; Y ) as the sample size n
grows.
Definition 2 A positive sequence {a_n} is called individual lower rate of convergence for a class D of distributions of (X, Y) if
    inf_{{g_n}} sup_{(X,Y)∈D} lim sup_{n->∞} (E L_n − L*) / a_n > 0,
where the infimum is taken over all sequences {g_n} of decision rules.
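The only difference between the two notions is the order in which the infimum over rules, the supremum over the class, and the limit are taken; written side by side (using the reconstructions above) the contrast is:

% Minimax lower rate (Definition 1) versus individual lower rate (Definition 2).
\[
  \limsup_{n\to\infty}\; \inf_{g_n}\; \sup_{(X,Y)\in\mathcal{D}}
      \frac{\mathbf{E} L_n - L^*}{a_n} > 0
  \qquad\text{vs.}\qquad
  \inf_{\{g_n\}}\; \sup_{(X,Y)\in\mathcal{D}}\; \limsup_{n\to\infty}
      \frac{\mathbf{E} L_n - L^*}{a_n} > 0 .
\]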
The concept of individual lower rate has been introduced in Birge [7] concerning density
estimation. For individual lower-rate results concerning pattern recognition see Antos and
We will show that for every sequence fb n g tending to zero, fb n n
2+d g is an individual
lower rate of convergence for the classes D . Hence there exist individual lower rates of
these classes, which are arbitrarily close to the optimal lower rates. These rates are the same
as the individual lower rates for the expected L 2 () error of regression function estimation
for these classes (see also Antos, Gyor and Kohler [4] and Antos [3]). Both for regression
function estimation and pattern recognition, the individual lower rates are optimal, hence
we extend Yang's observation for the individual rates for these classes.
Our results also imply that the ratio ELn L
can tend to zero arbitrary slowly (even
for a xed sequence f n g).
Next we give the denitions of the function classes, for which we derive lower rates of
convergence. Let and denote the L q norm regarding to
the Lebesgue-measure on R d by k k Lq () .
Denition 3 For given 1 q 1, R 2
be the class of functions f
such that for R
d ) with
kD
moreover, for every such
with
kD
where D
denotes the partial derivative with respect to
For q < 1, we assume R 1. For all the rate-of-convergence results below hold if
some of M 0 ,. ,MR 1 are innite, that is, if we omit some of the conditions of the rst kind.
These classes are generalizations of the Lipschitz classes of Example 2 in Yang [16], Antos,
Gyor and Kohler [4], Birge [7] and Stone [15], and also generalizations of a special case of
Example 4 in Yang [16] for polinomial modulus of continuity.
Denition 4 For given ~
and M > 0, let V
M) be the class of functions f
such that kfk1 M and for - > 0
where
R
R
r
and e i is the i th unit vector in R d .
These classes are just Example 5 in Yang [16]. We assume that M;M 0 > 1=2.
Definition 5 Denote one of the classes Lip^{α,d} and V^{α̃,d} above by F. Let D be the class of distributions of (X, Y) such that
(i) X is uniformly distributed on [0, 1]^d, and
(ii) the a posteriori probability function η is in F.
It is well-known, that there exist regression function estimates f n g, which satisfy
lim sup
sup
(X;Y )2D
+d
(see, e.g., [16] and Barron, Birge and Massart [6]). (This remains true replacing condition (i)
in Denition 5, e.g., with the assumption that is in a class of distributions with uniformly
bounded density funtions. Note that the rate depends only on and d.) Thus for the plug-in
rules fg n g
lim sup
sup
(X;Y )2D
2+d
that is, sup (X;Y )2D (EL n L
2+d
. (The rate n
2+d might be formulated as
d=, the exponent of dimension is just the exponent of 1= in the -metric
entropy of the classes, see [7].)
To handle the two types of classes together, let ~ for the
classes Lip ;d . Thus
holds in all cases. (We may call as the exponent of global smoothness following [7].)
Now we give conditions for a general function class F , which assure lower rates of
convergence for the corresponding distribution class D . A subclass of regression functions
will be indexed by the vectors
of +1 or 1 components, where are introduced below. Denote the set of all such
vectors by C. Let be the uniform distribution on [0; 1] d .
Assumption 1 With some ~
and satisfying (6), for any probability distribution fp j g
for every j 2 N , there is a function with support on the brick I
, such that
these are disjoint even for dierent j and j 0 , and for every c 2 C,
(c)
(which are independent of fp j g and j).
Note that
by (6).
Theorem 1 If Assumption 1 holds for a function class F , then the sequence
a
+d
is a lower rate of convergence for the corresponding distribution class D .
Assumption 1 is similar to A'2(k) in Birge [7]. In this form, it will be required only for
the individual lower rates. It seems from the proof (choice of fp j g) that for the minimax
rates above, we only use the weaker form:
Assumption 2 With some ~
and satisfying (6), for any 2 (0; 1], there is a function
with support on the brick I = I
in [0; 1] d , such that for every
(c)
r
Also km k 2
This is almost the same as Assumption 3 in Yang [16] and A2(k) in Birge [7] in case of
polinomial metric entropy, and Theorem 1 is analogous to Yang's Theorem 2. Both Yang's
and our theorems give as a special case:
Corollary 1 If F is Lip ;d or V ;d , then the sequence
a
2+d
is a lower rate of convergence for the class D .
Our main result is the following extention of Theorem 1 to individual lower rates (see
Antos [2] for Lip ;d if
Theorem 2 Let fb n g be an arbitrary positive sequence tending to zero. If Assumption 1
holds for a function class F , then the sequence
b n a
2+d
is an individual lower rate of convergence for the corresponding distribution class D .
Remark 1. Applying for the sequence f
b n g, Theorem 2 implies that for all fg n g there is
such that
lim sup
b n a n
Remark 2. Certainly Theorems 1 and 2 hold if we increase the class by leaving condition (i)
from Denition 5.
Focusing again on the classes Lip ;d and
Corollary 2 Let fb n g be an arbitrary positive sequence tending to zero. If F is Lip ;d or
then the sequence
b n a
2+d
is an individual lower rate of convergence for the class D .
Call a sequence fc n g an upper rate of convergence for a class D, if there exist rules fg n g
which satisfy
lim sup
sup
(X;Y )2D
that is, sup (X;Y )2D (EL n L it an individual upper rate of convergence
for a class D, if there exist rules fg n g which satisfy
sup
(X;Y )2D
lim sup
This implies only that for every distribution in D, EL n L possibly with dierent
constants. Then (5) implies that n
2+d is an upper rate of convergence, and thus also an
individual upper rate of convergence for D . While Theorem 1 shows only that there is no
upper rate of convergence for D better than n
, it follows from Theorem 2 that n
d
is even the optimal individual upper rate for D in the sense, that there doesn't exist an
individual upper rate c n of convergence for D , which satises
lim
Moreover (3) and (4) imply
fgng
sup
(X;Y )2D
lim sup
which shows that Theorem 2 cannot be improved by dropping b n . This shows the strange
nature of individual lower bounds, that while every sequence tending to zero faster than
2+d is an individual lower rate for D , n
2+d itself is not that.
3 Proofs
The proofs of the theorems apply the following lemma:
l-dimensional real vector taking values in [ 1=4; 1=4] l ,
let C be a zero mean random variable taking values in f1;+1g, and let Y l be inde-
pedent binary variables given C with
Then for the error probability of the Bayes decision for C based on ~
Proof. The Bayes decision is 1 if
Y
Y
(see [8]). One can verify that
where
Y
I fY i =1g +2
I fY i =0g
Y
where
For arbitrary 0 < q < 1=2, q if and only if j log T j log 1 q
By Markov's inequality
log 1 q
Moreover because of j log T
, we get
log T
Using the inequality for 1=4 x 1=4
on the one hand
and on the other hand
so
Hence
log T jg
Thus
Efg q@ 1 Efj log T jg
log 1 q
qA q@ 1q P
log 1 q
By choosing
Proof of Theorem 1. The method of this proof diers from that of Yang, it can be easily
modied to individual lower bound in Theorem 2. Assumption 1 and 0 (c) 1 imply
that each distribution (X; Y ) with X Unif[0; 1] d , Y 2 f0; 1g and EfY
for all x 2 [0; 1] d for some c 2 C is contained in D , which implies
lim sup
gn
sup
(X;Y )2D
a n
lim sup
sup
(X;Y ):XUnif[0;1] d ;EfY jX=xg= (c) (x);c2C
a n
Let g n be an arbitrary rule. By denition, fI A j;k =2 : j; kg is an orthogonal system in
for the measure
R
A
therefore the projection ^
of g
is given
by
I A j;k (x)
where
R (g n 1=2)I A j;k =2 d
R
I A j;k =2
d
R
A j;k
R
A j;k
R
A j;k
R
A j;k
R
A j;k
R
A j;k
(Note that ^
arbitrary. Note that g = 1+
I A j;k. Then
by (2)
Z (c) 1 I fgn 6=g g d
Z
d
Z
d
KX
Let ~
c n;j;k be otherwise. Because of j^c n;j;k c j;k j I f~c n;j;k 6=c j;k g , we get
I f~c n;j;k 6=c j;k g p +d
This proves
where
Equations (8) and imply
lim sup
gn
sup
(X;Y )2D (;M)
a n
K
gn
sup
a n
To bound the last term, we x the rules fg n g and choose c 2 C randomly. Let
be a sequence of independent identically distributed random variables independent of X 1 ,
. , which satisfy PfC 1=2. Next we derive a lower bound
for
The sign ~ c n;j;k can be interpreted as a decision on C j;k using D n . Its error probability is
minimal for the Bayes decision
C n;j;k , which is 1 if PfC
therefore
l be those X i 2 A j;k . Then given
distributed as
in the conditions of Lemma 1 with u
depends only on C n fC j;k g and on X r 's with r 62 fi therefore is independent of
Now conditioning on X the error of the conditional Bayes
decision for C j;k based on (Y depends only on (Y
Pf
By Jensen-inequality
Pf
1e 10E
K1np 2+d
independently of k. Thus if np 2+d
Pf
and
where
2+d for j n 1
2+d ,
2+d cn +1
so
lim sup
sup
a n
lim sup
a n
This together with (11) implies the assertion. 2
Proof of Theorem 2. We use the notations and results of the proof of Theorem 1. Now
we have by
fgng
sup
(X;Y )2D
lim sup
b n a n
K
fgng
sup
lim sup
b n a n
In this case we have to choose fp j g independently from n. Since b n and a n tend to zero, we
can take a subsequence fn t g t2N of fng n2N with b n t 2 t and a 1=
that
a 1=
and choose fp j g as
where q t is repeated 2 t =q t times. So
t:d2 t a 1=
a 1=
t:an t an
a 1=
a 1=
by a 1=
specially for
ER ns (C) K 3
ts
b ns a ns : (15)
We nish the proof in the spirit of Lemma 1 in Antos and Lugosi [5]. Using (15) one gets
fgng
sup
lim sup
b n a n
fgng
sup
lim sup
R ns (c)
b ns a ns
K 3
fgng
sup
lim sup
R ns (c)
ER ns (C)
K 3
fgng
lim sup
R ns (C)
ER ns (C)
Because of (12) and the fact that for all c 2 C
the sequence fR ns (C)=ER ns (C)g is uniformly bounded, so we can apply Fatou's lemma to
get
fgng
sup
lim sup
b n a n
K 3
fgng
lim sup
R ns (C)
ER ns (C)
This together with (14) implies the assertion. 2
Proof of Corollary 1 and 2. We must prove that the classes Lip ;d and V ;d satisfy
Assumption 1. The parameters ~ and are given as in the text satisfying (6). For any fp j g,
we give the required functions m j and sets A j;k .
First we pack the disjoint sets A j;k into [0; 1] d in the following way: Assume for simplicity,
that not, then the index of the minimal i takes the role of the rst
dimension in the construction below.) For a given fp j g, let fB j g be a partition of [0; 1] such
that B j is an interval of length p j . We pack disjoint translates of I j into the brick
This gives
Y
and since, for x 1, bxc x=2 and p j
d
Y
d
Y
where
Let be the uniform distribution on [0; 1] d and Choose a function
1=4] such that
(I) the support of m is a subset of [0; 1] d ,
denotes a diagonal matrix. Thus m j is a contraction of m from [0; 1] d to
I j . Now we have km
2 and
We only have to check that because of (III), for every c 2 C,
(c)
Case F
Note that the functions m j;k have disjoint support, and this holds also for their derivatives.
If q < 1, then for R
q
thus
For
with
kD
(c) k q
c j;k D
kD
Moreover, for every
with
kD
(c)
(c) (x)k Lq
c j;k - D
c j;k - D
c j;k - D
We can choose m such that its support is in [1=3; 2=3] d . Now if k-k < p j =3 then the support
of - D
m j;k is in A j;k , hence they are disjoint. Thus for the rst term, with M
c j;k - D
For the second term,
c j;k - D
c j;k D
c j;k D
c j;k D
c j;k D
c j;k D
kD
This gives
kD
(c)
If . For R
with P d
kD
c j;k D
kD
Moreover, for every - 2 R d , introduce x 0 and x 00 as a function of x the following way: If x
and x+ - fall in the same A j;k , then let x dierent bricks,
then consider the segment x(x + -), and let x 0 and x 00 be the intersections of this segment
and the borders of the bricks x and x vanishing
outside A j;k , any of its partial derivatives (up to order R) is zero on the border of A j;k , thus
in both case D
(c)
(c)
with
kD
(c)
(c) (x)k 1
(c)
(c)
(c)
(c) (x)k 1
kD
(c)
(c) (x 00
(c)
(c)
Using the fact that x and x 0 are in the same brick, and so the support of D
j;k (x) is in A j;k , for the second term
kD
(c)
(c)
c j;k (D
kp R
(D
sup
We get the same bound similarly for the rst term, which gives
kD
(c)
(See also [15], p. 1045.)
Case F
Now k (c) k1
c j;k R i
c j;k R i
c j;k R i
We can choose m such that its support is in [1=2; 1] d . Now, if R i - <
then the support
of R i
i;- m j;k is in A j;k , hence they are disjoint. For the rst term, with M
c j;k R i
For the second term,
c j;k R i
c j;k
r
r
(R
r
(R
r
(R
This gives
--R
Lower bounds on the rate of convergence of nonparametric pattern recog- nition
Performance limits of nonparametric estimators.
Strong minimax lower bounds for learning.
On nonparametric estimation of regression.
Statistical Estimation: Asymptotic Theory.
On the bounds for quality of nonparametric
Minimax Theory of Image Reconstruction.
Smooth discrimination analysis.
Consistent nonparametric regression.
Optimal global rates of convergence for nonparametric regression.
Minimax nonparametric classi
--TR
Characterizing rational versus exponential learning curves
Strong Minimax Lower Bounds for Learning
MiniMax Methods for Image Reconstruction
Lower Bounds on the Rate of Convergence of Nonparametric Pattern Recognition | nonparametric pattern recognition;individual rates of convergence |
633577 | A geometric approach to leveraging weak learners. | AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is proportional to the direction of steepest descent. We introduce a new leveraging algorithm based on a natural potential function. For this potential function, the direction of steepest descent can have negative components. Therefore, we provide two techniques for obtaining suitable distributions from these directions of steepest descent. The resulting algorithms have bounds that are incomparable to AdaBoost's. The analysis suggests that our algorithm is likely to perform better than AdaBoost on noisy data and with weak learners returning low confidence hypotheses. Modest experiments confirm that our algorithm can perform better than AdaBoost in these situations. | Introduction
Algorithms like AdaBoost [7] that are able to improve the hypotheses generated
by weak learning methods have great potential and practical benefits. We call
any such algorithm a leveraging algorithm, as it leverages the weak learning
method. Other examples of leveraging algorithms include bagging [3], arc-x4 [5],
and LogitBoost [8].
One class of leveraging algorithms follows the following template to construct
master hypotheses from a given sample
The leveraging algorithm begins with a default master hypothesis H 0
and then for
- Constructs a distribution D t over the sample (as a function of the
sample and the current master hypothesis H possibly t).
- Trains a weak learner using distribution D t over the sample to obtain
a weak hypothesis h t .
Picks ff t and creates the new master hypothesis,
authors were supported by NSF Grant CCR 9700201.
This is essentially the Arcing paradigm introduced by Breiman [5, 4] and the
skeleton of AdaBoost and other boost-by-resampling algorithms [6, 7]. Although
leveraging algorithms include arcing algorithms following this template, leveraging
algorithms are more general. In Section 2, we introduce the GeoLev algorithm
that changes the examples in the sample as well as the distribution over them.
In this paper we consider 2-class classification problems where each y
+1g. However, following Schapire and Singer [15], we allow the weak learner's
hypotheses to be "confidence rated," mapping the domain X to the real num-
bers. The sign of these numbers gives the predicted label, and the magnitude is a
measure of confidence. The master hypotheses produced by the above template
are interpreted in the same way.
Although the underlying goal is to produce hypotheses that generalize well,
we focus on how quickly the leveraging algorithm decreases the sample error.
There are a variety of results bounding the generalization error in terms of the
performance on the sample [16].
Given a sample the margin of a hypothesis h on instance
x i is y i h(x i ) and the margin of h on the entire sample is the vector
hypothesis that correctly labels the sample has a margin
vector whose components are all positive. Focusing on these margin vectors
provides a geometric intuition about the leveraging problem.
In particular, a potential function on margin space can be used to guide the
choices of D t and ff t . The distribution D t is the direction of steepest descent
and ff t is the value that minimizes the potential of H Leveraging
algorithms that can be viewed in this way perform a feasible direction descent
on the potential function. An amortized analysis using this potential function can
often be used to bound the number of iterations required to achieve zero sample
error. These potential functions give insight into the strengths and weaknesses
of various leveraging algorithms.
Boosting algorithms have the property that they can convert weak PAC
learning algorithms into strong PAC learning algorithms. Although the theory
behind the Adaboost algorithm is very elegant, it leads to the somewhat intriguing
result that minimizing the normalization factor of a distribution will
reduce the training error [14, 15]. Our search for a better understanding of how
AdaBoost reduces the sample error led to our geometric algorithms, GeoLev and
GeoArc. Although the performance bounds for these algorithms are too poor to
show that they have the boosting property, these bounds are incomparable to
AdaBoost's in that they are better when the weak hypotheses contain mostly
low-confidence predictions.
The main contributions of this paper are as follows:
We use a natural potential function to derive a new algorithm for leveraging
learners, called GeoLev (for Geometric Leveraging algorithm).
We highlight the relationship between AdaBoost, Arcing and feasible direction
linear programming [10].
- We use our geometric interpretation to prove convergence bounds on the
algorithm GeoLev. These bound the number of iterations taken by GeoLev
to achieve ffl classification error on the training set.
We provide a general transformation from GeoLev to an arcing algorithm
GeoArc, for which the same bounds hold.
We summarize some preliminary experiments with GeoLev and GeoArc.
We motivate a novel algorithm, GeoLev, by considering the geometry of "margin
space." Since many empirical and analytical results show that good margins on
the sample lead to small generalization error [2, 16], it is natural to seek a master
hypothesis with large margins. One heuristic is to seek a margin vector with
uniformly large margins, i.e. a vector parallel to 1). This indicates
that the master hypothesis is correct and equally confident on every instance in
the sample. The GeoLev algorithm exploits this heuristic by attempting to find
hypotheses whose margin vectors are as close as possible to the 1 direction.
We now focus on a single iteration of the leveraging process, dropping the
time subscripts. Margin vectors will be printed in bold face and often normalized
to have Euclidean length one. Thus H is the margin vector of the master
hypothesis H , whose i th component is
Let the goal vector,
p m), be 1 normalized to length one.
Recall that m is the sample size, so all margin vectors lie in ! m , and normalized
margin vectors lie on the m dimensional unit sphere. Note that it is easy to
re-scale the confidences - multiplying the predictions of any hypothesis H by a
constant does not change the direction of H's margin vector. Therefore we can
assume the appropriate normalization without loss of generality.
The first decision taken by the leverager is what distribution D to place on
the sample. Since distribution D has m components, it can also be viewed as a
(non-negative) vector in ! m .
The situation in margin-space at the start of the iteration is shown in Figure
1. In order to decrease the angle ' between H and g we must move the head
of H towards g. All vectors at angle ' to the goal vector g lie on a cone, and
their normalizations lie on the "rim" shown in the figure.
If h, the weak hypothesis's margin vector (which need not have unit length),
is parallel to H or tangent to the "rim", then no addition of h to H can decrease
the angle to g. On the other hand, if the line H cuts through the cone,
then the angle to the goal vector g can be reduced by adding some multiple of
h to H. The only time the angle to g cannot be decreased is when the h vector
lies in the plane P which is tangent to the cone and contains the vector H, as
shown in Figure 2.
theta
Fig. 1. Situation in margin space at the start of an iteration.
If the weak learner learns at all, then its hypothesis h is better than random
guessing, so the learners "edge", E i-D (y i h(x i )), will be positive. This means
that D \Delta h is positive, and if distribution D (viewed as a margin vector) is
perpendicular to plane P then h lies above P . Therefore the leverager is able to
use h to reduce the angle between H and g.
As suggested by the figures, the appropriate direction for D is
In general neither jjDjj
If all components of D are positive, it can be normalized to yield a distribution
on the sample for the weak learner. However, it is possible for some
components of D to be negative. In this case things are more complicated 1 . If a
component of D is negative, then we flip both the sign of that component and
the sign of the corresponding label in the sample. This creates a new direction
D 0 which can be normalized to a distribution D 0 and a new sample S 0 with the
same x i 's but (possibly) new labels y 0
. The modified sample S 0 and distribution
D 0 are then used to generate a new weak hypothesis, h. Let h 0 be the margins
of h on the modified sample S 0 , so h 0
In fact it is this complication which differentiates GeoLev from Arcing algorithms.
Arcing algorithms are not permitted to change the sample in this way. A second
transformation avoiding the label flipping is discussed in section 5.
Fig. 2. The direction D for the distribution used by GeoLev.
as the sign flips cancel.
The second decision taken by the algorithm is how to incorporate the weak
hypothesis h into its master hypothesis H . Any weak hypothesis with an "edge"
on the distribution D described above can be used to decrease '. Our goal is
to find the coefficient ff so that H
jjH+ffhjj2 decreases this angle as much as
possible. Taking derivatives shows that ' is minimized when
From this discussion we can see that GeoLev performs a kind of gradient
descent. If we consider the angle between g and the current H as a potential on
margin space, then D is the direction of steepest descent. Moving in a direction
that approximates this gradient takes us towards the goal vector. Since we have
only little control over the hypotheses returned by the weak learner, an approximation
to this direction is the best we can do. The step size is chosen adaptively
to make as much use of the weak hypothesis as possible.
The GeoLev Algorithm is summarized in Figure 3.
3 Relation to Previous Work
Breiman [5, 4] defines arcing algorithms using potential functions that can be expressed
as component-wise functions of the margins having the form 2
Breiman allows the component-wise potential f to depend on the sum of the ff i 's in
some arcing algorithms.
Input: A sample
a weak learning algorithm.
Initialize master hypothesis H to predict 0 everywhere
m)
Repeat:
do
if
add
else
add
do
call weak learner with distribution D over S 0 , obtaining hypothesis h
Fig. 3. The GeoLev Algorithm.
Breiman shows that, under certain conditions on f , arcing algorithms converge
to good hypotheses in the limit. Furthermore, he shows that AdaBoost is an arcing
algorithm with is an arcing algorithm with polynomial
f(x).
For completeness, we describe the AdaBoost algorithm and show in our notation
how it is performing feasible direction gradient descent on the potential
function
AdaBoost fits the template outlined in the introduction,
choosing the distribution
Z
where Z is the normalizing factor so that D sums to 1. The master hypothesis
is updated by adding in a multiple of the new weak hypothesis. The coefficient
ff is chosen to minimize
exp
the next iteration's Z value. Unlike GeoBoost, the margin vectors of AdaBoost's
hypotheses are not normalized.
We now show that AdaBoost can be viewed as minimizing the potential
by approximate gradient descent. The direction of steepest descent (w.r.t. the
components of the margin vector) is proportional to (5), the distribution AdaBoost
gives to the weak learner.
Continuing the analogy, the coefficient ff given to the new hypothesis should
minimize the potential, X
of the updated master hypothesis, which is identical to (6). Thus AdaBoost's
behavior is approximate gradient descent of the function defined in (7), where the
direction of descent is the weak learner's hypothesis. Furthermore, the bounds on
AdaBoost's performance proven by Schapire and Singer are implicitly performing
an amortized analysis over the potential function (8).
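For comparison, a sketch of the AdaBoost choices in the same margin-vector notation; the closed-form α shown is the usual one for ±1-valued weak hypotheses (for general confidence-rated hypotheses α is instead found by a one-dimensional minimization of the expression above).

import numpy as np

def adaboost_distribution(margins):
    # AdaBoost's distribution: D(i) proportional to exp(-y_i H(x_i)).
    w = np.exp(-margins)                     # margins[i] = y_i * H(x_i)
    return w / w.sum()

def adaboost_alpha_binary(dist, h_margins):
    # Closed-form coefficient for h(x) in {-1,+1}: alpha = 0.5*ln((1-eps)/eps),
    # where eps is the weighted error of h under dist.
    eps = dist[h_margins <= 0].sum()
    eps = min(max(eps, 1e-12), 1 - 1e-12)    # guard against division by zero
    return 0.5 * np.log((1 - eps) / eps)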
Arc-x4 also fits the template outlined in the introduction, keeping an unnormalized
master hypothesis. In our notation the distribution chosen at trial t is
proportional to
This algorithm can also be viewed as a gradient descent on the potential function
at the t th iteration. Rather than computing the coefficient ff as a function of
the weak hypothesis, arc-x4 always chooses ff = 1. Thus each h t has weight 1=t
in the master hypothesis, as in many gradient descent methods. Unfortunately,
the dependence of the potential function on t makes it difficult to use in an
amortized analysis.
This connection to gradient descent was hinted at by Freund [6] and noted by
Breiman and others [4, 8, 13]. Our interpretation generalizes the previous work
by relaxing the constraints on the potential function. In particular, we show how
to construct algorithms from potential functions where the direction of steepest
descent can have negative components. The potential function view of leveraging
algorithms shows their relationship to feasible descent linear programming, and
this relationship provides insight into the role of the weak learner.
Feasible direction methods try to move in the direction of steepest descent.
However, they must remain in the feasible region described by the constraints. A
descent direction is chosen that is closest to the (negative) gradient \Gammarf while
satisfying the constraints. For example, in a simplified Zoutendijk method, the
chosen direction d satisfies the constraints and maximizes \Gammarf \Delta d . Similarly, the
leveraging algorithms discussed are constrained to produce master hypotheses
lying in the span of the weak learner's hypothesis class. One can view the role
of the weak learner as finding a feasible direction close to the given distribution
(or negative gradient). In fact the weak learning assumption used in boosting
and in the analysis of GeoLev implies that there is always a feasible direction d
such that \Gammarf \Delta d is bounded above zero.
The gradient descent framework outlined above provides a method for deriving
the corresponding leveraging algorithm from smooth potential functions
over margin space.
The potential functions used by AdaBoost and arc-x4 have the advantage
that all the components of their gradients D are positive, and thus it is easy
to convert D into a distribution. On the other hand, the methods outlined in
the previous section and section 5 can be used to handle gradients with negative
components. The approach used by R-atsch et al. [13] can similarly be interpreted
as a potential function of the margins.
Recently, Friedman et al. [8] have given a maximum likelihood motivation
for AdaBoost, and introduced another leveraging algorithm based on the log-likelihood criterion. They indicate that minimizing the square loss potential, Σ_i (1 − y_i H(x_i))², performed less well in experiments than other monotone potentials, and
conjecture that its non-monotonicity (penalizing margins greater than 1) is a
contributing factor. Our methods described in section 5 may provide a way to
ameliorate this problem.
4 Convergence Bound
In this section we examine the number of iterations required by GeoLev to
achieve classification error ε on the sample. The key step shows how the sine
of the angle between the goal vector g and the master hypothesis H is reduced
each iteration. Upper bounding the resulting recurrence gives a bound on how
rapidly the training error decreases.
We begin by considering a single boosting iteration. The margin space quantities
are as previously defined (recall that g and H are 2-normed,
while D and h are not). In addition, let H 0 denote the new master hypothesis
at the end of the iteration, and ' 0 the angle between H 0 and g. We assume
throughout that the sample is finite.
Define r = (D · h)/||D||_1 to be the edge of the weak learner's hypothesis h with respect to the distribution given to the weak learner. Our bound on the decrease in θ will depend on h only through r and ||h||_2. Note that r was chosen to maintain consistency with the work of Schapire and Singer [15] and that (D · h) = r ||D||_1.
At the start of the iteration cos(θ) = (g · H), and at the end of the iteration sin(θ') is obtained from cos(θ') = (g · H') in the same way. Recall that H' is H + αh normalized, and since H already has unit length, ||H + αh||²_2 = 1 + 2α(H · h) + α²||h||²_2.
Lemma 1. The value cos²(θ′) is maximized (and sin(θ′) minimized) when
α = ((g · h) − (g · H)(H · h)) / ((g · H)||h||²_2 − (g · h)(H · h)).
Proof The lemma follows from examination of the first and second derivatives
of cos²(θ′) with respect to α. □
Using this value of α, a little algebra shows that
cos²(θ′) = cos²(θ) + (D · h)² / (||h||²_2 − (H · h)²).   (15)
Although we desire bounds that hold for all h, we find it convenient to first
minimize (15) with respect to (H · h). The remaining dependence on h will be
expressed as a function of r and ||h||_2 in the final bound.
Lemma 2. Equation (15) is minimized when (H · h) = 0.
Proof Again the lemma follows after examining the first and second derivatives
with respect to (H · h). □
This considerably simplifies (15), yielding
sin²(θ′) ≤ sin²(θ) − (D · h)² / ||h||²_2.
Recall that the distribution passed to the weak learner is D normalized by ||D||_1, so that
(D · h) = r ||D||_1.
Therefore,
sin²(θ′) ≤ sin²(θ) − r² ||D||²_1 / ||h||²_2.   (18)
We will bound this in two ways, using different bounds on ||D||_1. The first
of these bounds is derived by noting that ||D||_1 ≥ ||D||_2. Recall that
||D||_2 = sin(θ). Combining this with (18)
and the bound on ||D||_1 yields
sin(θ′) ≤ sin(θ) √(1 − r² / ||h||²_2).   (20)
Repeated application of this bound yields the following theorem.
Theorem 1. If r_1, . . . , r_T are the edges of the weak learner's hypotheses during
the first T iterations, then the sine of the angle between g and the margin vector
for the master hypothesis computed at iteration T is at most
∏_{t=1}^{T} √(1 − r_t² / ||h_t||²_2).
We can bound ||D||_1 another way to obtain a bound which is often better.
Note that ||D||_1 ≥ (D · g)/||g||_∞ = √m sin²(θ). Substituting this
into (18) and continuing as before yields
sin(θ′) ≤ sin(θ) √(1 − m r² sin²(θ) / ||h||²_2).   (22)
Continuing as above results in the following theorem.
Theorem 2. Let r_1, . . . , r_T be the edges of the weak learner's hypotheses and
θ_1, . . . , θ_T be the angles between g and the margins of the master hypotheses at
the start of the first T iterations. If θ_{T+1} is the angle between g and the margins
of the master hypothesis produced at iteration T then
sin(θ_{T+1}) ≤ ∏_{t=1}^{T} √(1 − m r_t² sin²(θ_t) / ||h_t||²_2).
To relate these results to the sample error we use the following lemma.
Lemma 3. If sin(θ) < √(⌈εm⌉/m), where θ is the angle between g and a master hypothesis
H, then the sample error of H is less than ε.
Proof Assume sin(θ) < √(R/m) with R = ⌈εm⌉, so cos²(θ) = (g · H)² > (m − R)/m. Since H
is 2-normed, this can only hold if H has more than m − R
positive components. Therefore the master hypothesis correctly classifies more
than m − R examples and the sample error rate is at most (R − 1)/m < ε. □
Combining Lemma 3 and Theorem 2 gives the following corollary.
Corollary 1. After iteration T, the sample error rate of GeoLev's master hypothesis
is bounded by
∏_{t=1}^{T} (1 − m r_t² sin²(θ_t) / ||h_t||²_2).
The recurrence of Theorem 2 is somewhat difficult to analyze, but we can
apply the following lemma from Abe et al. [1].
Lemma 4. Consider a sequence {g_t} of non-negative numbers satisfying g_{t+1} ≤
g_t (1 − c g_t), where c is a positive constant. If g_1 ≤ 1/c, then
g_t ≤ 1/(c t) for all t ∈ N.
Given a lower bound r on the r_t values and an upper bound H_2 on ||h_t||_2,
we can apply this lemma to recurrence (22). Setting g_t = sin²(θ_t) and
c = m r² / H_2² shows that
sin²(θ_{T+1}) ≤ H_2² / (m r² T).   (25)
This, and the previous results lead to the following theorem.
Theorem 3. If the weak learner always returns hypotheses with an edge greater
than r and H_2 is an upper bound on ||h_t||_2, then GeoLev's hypothesis will have
at most ε training error after
⌈H_2² / (ε m r²)⌉
iterations.
Similar bounds have been obtained by Freund and Schapire [7] for AdaBoost.
Theorem 4. After T iterations, the sample error rate of AdaBoost's master
hypothesis is at most
∏_{t=1}^{T} √(1 − r_t² / ||h_t||²_1).   (27)
The dependence on ||h||_1 is implicit in their bounds and can be removed
when h_t ∈ {−1, +1}^m.
Comparing Corollary 1 and Theorem 4 leads to the following observations.
First, the bound on GeoLev does not contain the square-root. If this were the
only difference, then it would correspond to a halving of the number of iterations
required to reach error rate ε on the sample. This effect can be approximated by
a factor of 2 on the r² terms.
A more important difference is the factors multiplying the r² terms. With
the preceding approximation GeoLev's bound has 2m sin²(θ_t)/||h_t||²_2 while
AdaBoost's bound has 1/||h_t||²_1. The larger this factor the better the bound. The
dependence on sin²(θ_t) means that GeoLev's progress tapers off as it approaches
zero sample error.
If the weak hypotheses are equally confident on all examples, then ||h_t||²_2 is
m times smaller than ||h_t||²_1 and the difference in factors is simply 2 sin²(θ_t). At
the start of the boosting process θ_t is close to π/2 and GeoLev's factor is larger.
However, sin²(θ_t) can be as small as 1/m before GeoLev predicts perfectly on the
sample. Thus GeoLev does not seem to gain as much from later iterations, and
this difficulty prevents us from showing that GeoLev is a boosting algorithm.
On the other hand, consider the less likely situation where the weak hypotheses
produce a confident prediction for only one sample point, and abstain
on the rest. Now ||h_t||²_2 = ||h_t||²_1, and GeoLev's bound has an extra factor of
about 2m sin²(θ_t). GeoLev's bounds are uniformly better³ than AdaBoost's in
this case.
5 Conversion to an Arcing Algorithm
The GeoLev algorithm discussed so far does not fit the template for Arcing
algorithms because it modifies the labels in the sample given to the weak learner.
³ We must switch to recurrence (20) rather than recurrence (22) when sin²(θ_t) is very
small.
This also breaks the boosting paradigm as the weak learner may be required to
produce a good hypothesis for data that is not consistent with any concept in
the underlying concept class. In this section we describe a generic conversion
that produces arcing algorithms from leveraging algorithms of this kind without
placing an additional burden on the weak learner. Throughout this section we
assume that the weak learner's hypotheses produce values in [−1, +1].
The conversion introduces a wrapper between the weak learner and the leveraging
algorithm that replaces the sign-flip trick of section 2. This wrapper takes the
weighting D from the leveraging algorithm, and creates the distribution D′
by setting all negative components to zero and re-normalizing. This modified
distribution D 0 is then given to the weak learner, which returns a hypothesis h
with a margin vector h. The margin vector is modified by the wrapper before
being passed on to the leveraging algorithm: if D(x i ) is negative then h i is set
to \Gamma1. Thus the leveraging algorithm sees a modified margin vector h 0 which it
uses to compute α and the margins of the new master hypothesis.
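The wrapper just described can be sketched in a few lines of Python; the weak-learner interface and the use of y·h(x) as the margin components are assumptions made for the illustration only.

```python
import numpy as np

def wrapped_weak_learner(D, X, y, weak_learn):
    """Wrapper of this section: the leveraging algorithm requests a (signed)
    weighting D, but the weak learner only ever sees a genuine distribution."""
    D = np.asarray(D, dtype=float)
    D_pos = np.clip(D, 0.0, None)
    dist = D_pos / D_pos.sum()                 # negative components zeroed, then renormalized
    h = weak_learn(X, y, dist)                 # hypothesis with values in [-1, +1]
    margins = y * np.array([h(x) for x in X])  # margin vector of h on the sample
    margins[D < 0] = -1.0                      # report the worst possible margin there
    return h, margins                          # the leveraging algorithm uses `margins`
```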
The intuition is that the leveraging algorithm is being fooled into thinking
that the weak hypothesis is wrong on parts of the sample when it is actually
correct. Therefore the margins of the master hypothesis are actually better than
those tracked by the leveraging algorithm. Furthermore, the apparent "edge" of
the weak learner can only be increased by this wrapping transformation. This
intuition is formalized in the following theorems.
Theorem 5. If r is the edge of the weak learner with
respect to the distribution it sees, and r′ is the edge of the
modified weak hypothesis with respect to the (signed) weighting D requested by
the leveraging algorithm, then r′ ≥ r.
Proof
ensures that both D 0 otherwise, and
The assumption on h implies r - 1, so r 0 is minimized at
Theorem 6. No component of the master margin vector used by
the wrapped leveraging algorithm at step t is ever greater than the corresponding actual margin of the
master hypothesis.
Proof The theorem follows immediately by noting that each component of h′
is no greater than the corresponding component of h_t. □
We call the wrapped version of GeoLev, GeoArc, as it is an Arcing algorithm.
It is instructive to examine the potential function associated with GeoArc:
ip
min
This potential has a similar form to the following potential function which is
zero on the entire positive orthant:
ip
The leveraging framework we have described together with this transformation
enables the analysis of some undifferentiable potential functions. The full
implications of this remain to be explored.
6 Preliminary Experiments
We performed experiments comparing GeoLev and GeoArc to AdaBoost on a
set of 13 datasets (the 2-class ones used in previous experiments) from the UCI
repository. These experiments were run along the same lines as those reported
by Quinlan [12]. We ran cross validation on the datasets for
two-class classification. All leveraging algorithms ran for 25 iterations, and used
single node decision trees as implemented in MLC++ [9] for the weak hypotheses.
Note that these are ±1 valued hypotheses, with large 2-norms. It was noticed
that the splitting criterion used for the single node had a large impact on the
results. Therefore, the results reported for each dataset are those for the better of
mutual information ratio and gain ratio. We report only a comparison between
AdaBoost and GeoLev; GeoArc performed comparably to GeoLev. The results
are illustrated in figure 4. This figure is a scatter plot of the generalization error
on each of the datasets. These results appear to indicate that the new algorithms
are comparable to AdaBoost.
Further experiments are clearly warranted and we are especially interested
in situations where the weak learner produces hypotheses with small 2-norm.
7 Conclusions and Directions for Further Study
We have presented the GeoLev and GeoArc algorithms which attempt to form
master hypotheses that are correct and equally confident over the sample. We
found it convenient to view these algorithms as performing a feasible direction
gradient descent constrained by the hypotheses produced by the weak learner.
The potential function used by GeoLev is not monotonic: its gradient can have
negative components. Therefore the direction of steepest descent cannot simply
be normalized to create a distribution for the weak learner.
We described two ways to solve this problem. The first constructs a modified
sample by flipping some of the labels. This solution is mildly unsatisfying as
it strengthens the requirements on the weak learner - the weak learner must now
deal with a broader class of possible targets. Therefore we also presented a second
transformation that does not increase the requirements on the weak learner.
In fact, using this second transformation can actually improve the efficiency of
the leveraging algorithm. (Fig. 4: Generalization error of GeoLev versus AdaBoost after 25 rounds.) One open issue is whether or not this improvement
can be exploited to improve GeoArc's performance bounds. A second open issue
is to determine the effectiveness of these transformations when applied to other
non-monotonic potential functions, such as those considered by Mason et al. [11].
We have upper bounded the sample error rate of the master hypotheses
produced by the GeoLev and GeoArc algorithms. These bounds are incomparable
with the analogous bounds for AdaBoost. The bounds indicate that GeoLev/GeoArc
may perform slightly better at the start of the leveraging process
and when the weak hypotheses contain many low-confidence predictions. On
the other hand, the bounds indicate that GeoLev/GeoArc may not exploit later
iterations as well, and may be less effective when the weak learner produces
±1-valued hypotheses. These disadvantages make it unlikely that the GeoArc
algorithm has the boosting property.
One possible explanation is that GeoLev/GeoArc aim at a cone inscribed in
the positive orthant in margin space. As the sample size grows, the dimension of
the space increases and the volume of the cone becomes a diminishing fraction of
the positive orthant. AdaBoost's potential function appears better at navigating
into the "corners" of the positive orthant.
However, our preliminary tests indicate that after 25 iterations the generalization
errors of GeoArc/GeoLev are similar to AdaBoost's on 13 classification
datasets from the UCI repository. These comparisons used 1-node decision tree
classifiers as the weak learning method. It would be interesting to compare their
relative performances when using a weak learner that produces hypotheses with
many low-confidence predictions.
Acknowledgments
We would like to thank Manfred Warmuth, Robert Schapire, Yoav Freund, Arun
Jagota, Claudio Gentile and the EuroColt program committee for their useful
comments on the preliminary version of this paper.
--R
Polynomial learnability of probabilistic concepts with respect to the Kullback-Leibler divergence
A training algorithm for optimal margin classifiers.
Bagging predictors.
Arcing the edge.
Boosting a weak learning algorithm by majority.
A decision-theoretic generalization of on-line learning and an application to boosting
Additive logistic re- gression: a statistical view of boosting
Data mining using MLC
Improved generalization through explicit optimization of margins.
Bagging, boosting and c4.
margins for adaboost.
Boosting the margin: a new explanation for the effectiveness of voting methods.
Improved boosting algorithms using confidence-rated predictions
Estimation of Dependences Based on Empirical Data.
--TR
A theory of the learnable
What size net gives valid generalization?
Polynomial learnability of probabilistic concepts with respect to the Kullback-Leibler divergence
Equivalence of models for polynomial learnability
A training algorithm for optimal margin classifiers
The design and analysis of efficient learning algorithms
Learning Boolean formulas
An introduction to computational learning theory
Boosting a weak learning algorithm by majority
Bagging predictors
Exponentiated gradient versus gradient descent for linear predictors
A decision-theoretic generalization of on-line learning and an application to boosting
General convergence results for linear discriminant updates
An adaptive version of the boost by majority algorithm
Drifting games
Additive models, boosting, and inference for generalized divergences
Boosting as entropy projection
Prediction games and arcing algorithms
Improved Boosting Algorithms Using Confidence-rated Predictions
An Empirical Comparison of Voting Classification Algorithms
Margin Distribution Bounds on Generalization | classification;boosting;learning;gradient descent;ensemble methods |
633623 | Phase transition for parking blocks, Brownian excursion and coalescence. | In this paper, we consider hashing with linear probing for a hashing table with m places, n items (n > m), and places. For a noncomputer science-minded reader, we shall use the metaphore of n cars parking on m places: each car ci chooses a place pi at random, and if pi is occupied, ci tries successively finds an empty place. Pittel [42] proves that when /m goes to some positive limit > 1, the size B1m,1 of the largest block of consecutive cars satisfies 2( converges weakly to an extreme-value distribution. In this paper we examine at which level for n a phase transition occurs between o(m). The intermediate case reveals an interesting behavior of sizes of blocks, related to the standard additive coalescent in the same way as the sizes of connected components of the random graph are related to the multiplicative coalescent. | Introduction
We consider hashing with linear probing for a hashing table with n places {1, 2, . . . , n},
m(n) items, and E(n) = n − m(n) empty places. Hashing with
linear probing is a fundamental object in analysis of algorithms: its study goes back
to the 1960's (Knuth [17], or Konheim & Weiss [19]) and is still active (Pittel [27],
Knuth [18], or Flajolet et al. [11]). For a non computer science-minded reader,
we shall use, all along the paper, the metaphor of m(n) cars parking on n places,
leaving E(n) places empty: each car c_i chooses a place p_i at random, and if p_i
is occupied, c_i tries successively p_i + 1, p_i + 2, . . . , until it finds an empty place. We
use the convention that place n + 1 is place 1. Under the name of parking
function, hashing with linear probing has been and is still studied by combinatorists
(Schutzenberger [34], Riordan [30], Foata & Riordan [12], Francon [15] or Stanley
1 Institut Elie Cartan, INRIA, CNRS and Universite Henri Poincare,
BP 239, 54 506 Vandoeuvre Cedex, France.
[email protected]
Universite Libre de Bruxelles, Departement d'Informatique,
Campus Plaine, CP 212, Bvd du Triomphe, 1050 Bruxelles, Belgium.
[email protected]
[35, 36]). There is a nice development on the connections between parking functions
and many other combinatorial objects in Section 4 of [11] (see also [35]). In this
paper, and also in [20], we use mainly a (maybe less exploited) connection between
parking functions and empirical processes of mathematical statistics (see also the
recent paper [25]) .
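The car-parking metaphor is easy to simulate; the following sketch (with the circular convention implemented by modular arithmetic) is only meant to make the notion of block concrete and is not part of the analysis.

```python
import random

def park(n, m, rng=random):
    """Park m cars on n circular places by linear probing; return occupancy."""
    occupied = [False] * n
    for _ in range(m):
        p = rng.randrange(n)            # first try of the car
        while occupied[p]:
            p = (p + 1) % n             # place n + 1 is place 1
        occupied[p] = True
    return occupied

def block_sizes(occupied):
    """Sizes of maximal circular runs of occupied places, in decreasing order."""
    n = len(occupied)
    if all(occupied):
        return [n]
    # start the scan just after an empty place so no block is split by the wrap-around
    start = next(i for i, o in enumerate(occupied) if not o) + 1
    sizes, run = [], 0
    for k in range(n):
        if occupied[(start + k) % n]:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sorted(sizes, reverse=True)

# e.g. largest block when E(n) = n - m is of order sqrt(n):
# occ = park(10**4, 10**4 - 100); print(block_sizes(occ)[0] / 10**4)
```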
Pittel [27] proves that when E(n)=n goes to some positive limit , the size of the
largest block of consecutive cars B (1)
converges weakly to an extreme-value distribution. This paper is
essentially concerned with what we would call the "emergence of a giant block" (see
[5, 7] for an historic of the emergence of the giant component of a random graph,
and also [2, 14, 16]):
Theorem 1.1 For n and m(n) going jointly to +∞, we have:
(i) if E(n)/√n → +∞, then B^(1)_{n,m(n)}/n → 0 in probability;
(ii) if E(n)/√n → 0, then B^(1)_{n,m(n)}/n → 1 in probability;
(iii) if E(n)/√n → λ ∈ ]0, +∞[, then B^(1)_{n,m(n)}/n converges weakly to a random variable B_1(λ),
in which B_1(λ) belongs almost surely to ]0, 1[.
So the threshold phenomenon is less pronounced than in the random graph pro-
cess. However, the behaviour during the transition is reminiscent of the random
graph process: while Aldous [2] observed a limiting behaviour of connected components
for the random graph process related to the multiplicative coalescent, here, it
rather seems that the additive coalescent (cf. Aldous & Pitman [4]) comes into play.
Theorem 1.3 describes the random variable B 1 ().
Let B_{n,m} = (B^(k)_{n,m})_{k≥1} be the decreasing sequence of sizes of blocks, ended by an
infinite sequence of 0's. Define analogously R_{n,m} = (R^(k)_{n,m})_{k≥1} as the sequence
of sizes of blocks when the blocks are sorted by increasing date of birth (in increasing
order of first arrival of a car: see, for instance, Figure 1).
Theorem 1.2 If lim E(n)/√n = λ, then (1/n) R_{n,m(n)} converges weakly to a random sequence X(λ) = (X_k(λ))_{k≥1}.
The law of X(λ) is characterized by the fact that (X_1(λ) + X_2(λ) + · · · + X_k(λ))_{k≥1}
is distributed as the sequence
( (N_1² + · · · + N_k²) / (λ² + N_1² + · · · + N_k²) )_{k≥1},
in which the N_k are standard Gaussian and independent.
Figure
1: parking schemes for places.
One recognizes the marginal law of the -valued fragmentation process derived
from the continuum random tree, introduced by Aldous & Pitman in their study of
the standard additive coalescent [4]. In order to describe the limit of Bn
n , we dene
a family of operators on the space E of continuous nonnegative functions f(x),
y
Let e be a normalized Brownian excursion, that is a 3-Bessel bridge (see [29] Chap.
XI and XII for background). Let k1 be the sequence of widths
of excursions of e, sorted in decreasing order. By excursion of a function f , we
understand the restriction of the function f to an interval [a; b] in which f does
not have any change of sign, and more precisely, such that
b[. The Brownian motion is known to have innitely many
excursions in the neighborhood of any of its zeros. This property holds true for e,
with the exception of 0, that is a.s. an isolated point in the set of zeroes of e.
Nevertheless e has, almost surely, innitely many excursions in the interval [0; 1]
(it can be seen as a consequence of Theorem 4.1, or more generally, of Cameron-
Martin-Girsanov formula). We have:
Theorem 1.3 If lim E(n) p
Incidentally, the length L() of the excursion of e beginning at 0 is studied by
Bertoin [6] in a recent paper: he gives the transition kernel of the Markov process
Figure
2: parking schemes for places.
Theorem 1.1 is the width of the largest excursion of e. The
fact that B 1 () belongs almost surely to ]0; 1[, and has a density, follows from the
next Theorem about the Brownian excursion, that was in turn suggested by the
more combinatorial in nature Theorem 1.2:
Theorem 1.4 The law of the size-biased permutation Y(λ) of B(λ) satisfies
(Y_1(λ) + · · · + Y_k(λ))_{k≥1}  =_{law}  ( (N_1² + · · · + N_k²) / (λ² + N_1² + · · · + N_k²) )_{k≥1},
in which the N_k are standard Gaussian and independent.
The size-biased permutation of a random probability distribution such as B(λ) is
constructed as follows. Consider a sequence of independent, positive, integer-valued
random variables distributed according to B(λ):
Pr( I_k = j | B(λ) ) = B_j(λ),  j ≥ 1, k ≥ 1.
With probability 1 each positive integer appears at least once in the sequence (I_k)_{k≥1}.
Erase each repetition after the first occurrence of a given integer in the sequence: there
remains a random permutation (σ(k))_{k≥1} of the positive integers. Set:
Y_k(λ) = B_{σ(k)}(λ).
Size-biased permutations of random discrete probabilities have been studied by
Aldous [3] and Pitman, [23], [24]. The most celebrated example is the size-biased
permutation of the sequence of limit sizes of cycles of a random permutation. While
the limit distribution of the sizes of the largest, second largest, . . . cycle have a complicated
expression (see Dickman [9], Shepp & Lloyd [32]), the size-biased permutation is distributed as
( U_1, (1 − U_1)U_2, (1 − U_1)(1 − U_2)U_3, . . . ),
in which the U_k are uniform on [0, 1] and independent. See also the beautiful developments
about Poisson-Dirichlet distributions, in [3], [21], [22], [24] and [26].
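For a finite probability vector, the erase-the-repetitions construction above is equivalent to drawing indices without replacement, each step with probability proportional to the remaining weights; the following sketch (our own, for illustration only) implements this finite analogue.

```python
import random

def size_biased_permutation(weights, rng=random):
    """Return a size-biased permutation of a finite probability vector:
    indices are drawn with probability proportional to their weight, and
    repetitions are erased, as in the construction above."""
    remaining = dict(enumerate(weights))
    order = []
    while remaining:
        idx = rng.choices(list(remaining), weights=list(remaining.values()))[0]
        order.append(idx)
        del remaining[idx]
    return [weights[i] for i in order]

# e.g. size_biased_permutation([0.5, 0.3, 0.2]) might return [0.3, 0.5, 0.2]
```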
Actually, Theorem 1.4 gives a implicit description of the law of B(), for instance
it proves that almost surely each Y k () is positive, and thus a.s.
1. There exist even formulas, due to Perman [21], giving the joint distribution
of the k th largest weights of a random discrete probability in term of the joint
densities of its size-biased permutation, in the special case where the random discrete
probability comes from the order statistics for jumps of normalised subordinators
these formulas do not seem to apply here. Flajolet & Salvy [13] have a direct
approach to the computation of the density of B_1(λ), by methods based on Cauchy
coefficient integrals to which the saddle point method is applied: the density is a
variant of the Dickman function. Note that Bertoin [6] studies this stochastic process
as a tool for the study of the excursions of the reflected Brownian
motion with a varying drift. He establishes interesting properties of it, for instance its
Markov property.
Finally, set:
We have
Theorem is the stable subordinator with exponent 1=2,
meaning that, for any k and any k-tuple of positive numbers
Thus, as a stochastic process, S has the same law as the process of hitting times
of the Brownian motion. With Theorem 1.4, this is still another feature that Y ()
shares with the stochastic additive coalescent. Theorems 1.5 and 1.4 suggest that
the process (Y ()) 0 has the same law as the -valued fragmentation process,
providing an alternative construction of the stochastic additive coalescent. A formal
proof is out of the scope of this paper (however see the concluding remarks).
Theorem 1.5 has interest in itself, but it is is relevant to the study of parking
schemes only if we are able to prove weak convergence of the process "size of the
block containing car c 1 " to the process (Y 1 ()) 0 . The next Theorem lls incompletely
this gap. Let (R (1)
the sequence of successive widths R (1)
n;k of
the block containing car c 1 when cars are parked on n places, and k
places are still empty. If k n, set R (1)
R (1)
n;d
ne
We are able to build on the same space a Brownian excursion e and a sequence of
parking schemes of n cars on n places, in such a way that:
Theorem 1.6
Pr
We would need an almost sure convergence for the Skorohod topology, in order
to ll completely the gap. The fact that S() is a pure jump process makes sense in
the parking scheme context, since the block of car c 1 is known to increase by O(n)
while only O(
n) cars arrived: it can only be explained by coalescence with other
blocks of size O(n), that is, by instantaneous jumps.
The paper is organized as follows. Section 2 analyses the block containing a
given car or a given site, leading to a proof of Theorem 1.1 (i),(ii). Section 3
provides a combinatorial proof of Theorem 1.2. Section 4 uses a decomposition of
sample paths of e (cf. Theorem 4.1) which, we believe, has interest in itself, to
provide a simple proof of Theorems 1.4 and 1.5. Theorem 4.1 is proven at Section
5, with the help of the combinatorial identity (2.2). Proofs of Theorem 1.3 and 1.1
(iii) are also given in Section 5, where we exhibit a close coupling between empirical
processes of mathematical statistics and the prole associated with a parking scheme
(see previous gures, and for a denition of the prole see Section 5). Section 6 is
devoted to the proof of Theorem 1.6 . Section 7 concludes the paper.
2 On the block containing a given car, or a given site
In this Section,we give the proof of Theorem 1.1((i) and (ii)). In order to do that,
we give a partial proof of Theorem 1.2, concerning the size R (1)
n;E(n) of the block
containing car c 1 : we have
Theorem 2.1 If n^{−1/2} E(n) → λ > 0, then
R^(1)_{n,E(n)} / n   converges in law to   N² / (λ² + N²),
in which N is standard Gaussian.
Proof : The probability that, when parking m cars on n places, the block containing
car c 1 has k elements, denoted Pr(R (1)
Clearly, the number of parking schemes for m cars on n places is n m . One has to
choose the set of k 1 cars that belong to the same block as c 1 , giving the factor
the place where this block begins, giving the factor n, the way these k cars
are allocated on these k places, giving the factor nally one has to
park the m k remaining cars on the n k 2 remaining places, leaving one empty
place at the beginning and at the end of the block containing car c 1 , and this gives
the factor (n Flajolet et al. (1998) or Knuth (1998)
for justication of the third and of the last factor). Note that these computations
would hold for any given car instead of c 1 .
At the end of this Section, we shall prove that:
Lemma 2.2 For any 0 < < 1=2 there exists a constant C() such that,
whenever , simultaneously, k
, we have:
'(n; m; k)n f
Proof of Theorem 2.1. Owing to
Lemma 2.2 yields, for 0 < a < b < 1, that
Pr(an R (1)
a
Doing
Z +1e y=2 dy
Thus, ψ_λ is a probability density, and R^(1)_{n,E(n)}/n has for limit
law ψ_λ(x) dx. Furthermore, the previous change of variable entails that if some random
variable W has the density ψ_λ(x), then λ² W/(1 − W) has a
γ(1/2, 1/2) law, that
is, λ² W/(1 − W) has the same law as the square of a standard Gaussian random
variable. □
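Carrying out this change of variable explicitly, and assuming the γ(1/2, 1/2) identity above, gives the following closed form for the limit density (denoted ψ_λ here); this expression is our own computation and should be read as a sketch.

```latex
% Density of the limit law of R^{(1)}_{n,E(n)}/n obtained from the change of
% variable y = \lambda^2 x/(1-x) applied to the \chi^2_1 density.
\psi_\lambda(x) \;=\; \frac{\lambda}{\sqrt{2\pi}}\;
x^{-1/2}\,(1-x)^{-3/2}\,
\exp\!\Big(-\frac{\lambda^{2}x}{2(1-x)}\Big),
\qquad 0<x<1 .
```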
As a consequence of Theorem 2.1, we prove now (i) and (ii) of Theorem 1.1.
Considering (ii), provided that E(n)
n R (1)
n;d ne :
As
n), for any > 0 and for n large
n;d
ne < nx):
Due to Theorem 2.1, we obtain that for any > 0
lim sup
Clearly, for x < 1,
Pr
0:
As regards (i), let L n be the length of the block of cars containing
place 1 (resp. the length of the largest block) when car c bn p nc arrives. We
have, for k > 0,
Pr(R (1)
and place 1 is empty with probability:
ne
We have also
and thus
Assuming p
we obtain that for any ,
when n is large enough, not depending on !, so that:
lim sup
nally yielding (i). }
We prove (iii) of Theorem 1.1 in Section 5, together with Theorem 1.3.
Proof of Lemma 2.2. Setting, for brevity, E(n), we can
in
We obtain
n) +O(
and nally:
exp
3 Proof of Theorem 1.2
We rst provide a useful identity leading to the proof of Theorem 1.2. Set
We have
Theorem 3.1
Y
Proof : The choice of the elements in each of the blocks can be done in
Y
ways, and they can be arranged inside each of these blocks in
Y
ways.
It is more convenient to argue in terms of conned parking schemes, as in Knuth
(1998) or in Flajolet et al. (1998): that is, we can assume the last place to be empty,
since rotations does not change the sizes of blocks. The total number of conned
parking schemes is n m 1 (n m). We obtain a conned parking scheme with sizes k 1 ,
. , for the i rst blocks, respectively, by inserting these i blocks successively,
with an empty place attached to the right of them, insertion taking place at the
front of the conned parking scheme for the remaining cars, or just after one of
the empty places of the conned parking scheme for the remaining cars. There are
possible insertions
for the rst possible insertions for the second block, and so on
. Finally, the probability p(k) on the left hand of Theorem 3.1 is given by
Y
It is not hard to check that this last expression is the same as the right hand of
Theorem 3.1. }
Proof of Theorem 1.2. Set:
Using the same line of proof as in Theorem 2.1, the approximations of Lemma 2.2
for '(n; m; k) and Theorem 3.1 yield the joint density of (X 1
Y
f
Equivalently, the conditional law of X j has the
f
in
Equivalently again, X j is distributed as
in which N j is standard Gaussian and independent of (X 1
We have:
and
so a straightforward induction gives Theorem 1.2. }
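Written out, and assuming the conditional law of X_{k+1} read off from the garbled display above, the induction runs as follows (with T_k = N_1² + · · · + N_k² and S_k = X_1(λ) + · · · + X_k(λ)):

```latex
% If S_k \stackrel{d}{=} T_k/(\lambda^2+T_k) and, conditionally,
% X_{k+1} \stackrel{d}{=} (1-S_k)\,N_{k+1}^2/(\lambda^2+T_{k+1}), then
S_{k+1} \;\stackrel{d}{=}\;
\frac{T_k}{\lambda^2+T_k}
\;+\;\frac{\lambda^2}{\lambda^2+T_k}\cdot\frac{N_{k+1}^2}{\lambda^2+T_{k+1}}
\;=\;\frac{T_{k+1}}{\lambda^2+T_{k+1}} .
```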
4 On the excursions of e
In this Section, we give the proofs of our last two main theorems, Theorem 1.4 and
Theorem 1.5.
4.1 Decomposition of paths of e
These results are simple consequences of a property of decomposition of sample paths
of e that, we believe, has interest in itself: let U 1 be a random variable uniformly
distributed on [0; 1] and independent of e and let D (resp. F ) denote the last zero
of e before U 1 (resp. the rst zero of e after U 1 ), so that Y 1
We have:
Theorem 4.1 We have:
(i) λ² Y_1(λ)/(1 − Y_1(λ)) has the same distribution as N²,
in which N is standard Gaussian;
(ii) f is a normalized Brownian excursion, independent of Y_1(λ);
(iii) Let W be uniformly distributed on [0; 1] and independent of e. Given (f; V )
distributed as T
The introduction of W is a rather unpleasant feature. Actually, the study of h n
suggests that, if '(t) denote the local time of r at 0, on the interval [0; t], point (iii)
should be replaced by
reaches its unique minimum at a point
distributed as T
If we could prove this last point, a second problem would arise: we do not know
uniformly distributed on [0; 1] and independent of (f; Y 1 ()).
However, fV + Wg has surely these properties. On the other hand, for the purpose
of proving Theorem 1.2, the introduction of W is harmless, as the law of the
length of the excursion that straddles U 1 is the same for T
Theorem 4.1 is proven at Subsection 5.4. The starting point of the proof is the
combinatorial identity:
4.2 Proof of Theorem 1.4
It should not be di-cult, following the line of proof of Theorem 4.1, subsection
5.4, to exhibit a
on which there is almost sure convergence ofn (R (1)
for each k, yielding Theorem 1.4.
We prefer to borrow the nice idea of Section 6.4 in Pitman & Yor [26], that uses
the decomposition of sample paths of a Brownian bridge to prove distributional
properties of a sample from a Poisson-Dirichlet distribution.
We introduce, as in [26], a sequence
U being independent of e : with probability 1, U k falls inside some excursion (D
of e ; if this excursion has width B j (), we dene
I
yielding a size-biased permutation of B(), as explained in the introduction. Set:
The random variables U T (k) are independent and uniformly distributed on [0; D] [
[F; 1], and there exist a unique number V k 2]0; 1[ such that
k1 is a sequence of independent random variables, uniform on [0; 1], and
independent of (e; U 1 ).
In view of Theorem 4.1, this leads to
Lemma 4.2 Given that Y 1 the sequence Y distributed
as (1 x)Y (
Actually, but among the (U i ) i2 , only the U T (k) are useful
to determine Y (). Actually Y () is a size-biased permutation of the sequence of
widths of excursions of r: more precisely it is the size-biased permutation built with
the help of the sequence V . As a consequence, it is also the size-biased permutation
of the sequence of widths of excursions of q, built with the help of the sequence
~
k1 . This ends the proof of Lemma 4.2, as ~
V is a sequence of
independent and uniform random variables, independent of (r; W ).
Using Lemma 4.2, we prove, by induction on k, the two following properties
has the distribution asserted in Theorem 1.4 ;
distributed as (1 s k )Y (
The conditional law of is the conditional
law of (1 s k )Y (
Thus, due to
Lemma 4.2, it has the same law as
in
giving point(2) for k + 1. Set:
Due to point(2) for k, and to point (i) of Theorem 4.1, given that (Y j
or equivalently
giving
4.3 Proof of Theorem 1.5
There exists a similar result in a seemingly dierent setting, that is, for the standard
additive coalescent (cf. [4]).
Given that (D; F distributed as a+xU in which U is uniform
on ]0; 1[. On the other hand, it is easy to see that, for t 2 [0; 1],
x
Thus given (D; F distributed as
Since this last distribution does not depend on a, it is also the conditional distribution
of (Y Equivalently, by change of variables,
the conditional distribution of (S( y, is the same as
the unconditional distribution of
1+y
This last statement yields (1.1), by induction on k: assuming that property at
we see that, given that S( 1 y,
Owing to Theorem 4.1,
5 Proof of Theorems 1.3 and 4.1
denote the number of cars that tried to park on place k, successfully or not.
The proof of Theorem 1.3 will be in three steps: in Subsection 5.2, using Theorems
of Doob and Vervaat we shall prove that
Theorem 5.1 If lim n
weakly
in which h is dened, with the help of a uniform random variable U independent
of e, by:
As a rst result, we shall establish in Subsection 5.1 a close coupling between
H k and the empirical processes of mathematical statistics. Theorem 5.1 is a generalization
of a similar Theorem established in Marckert & Chassaing, for the case
In Subsection 5.3 we shall prove the convergence of the widths
of excursions of h n to that of e, using essentially results of Section 2.3 in Aldous
(1997).
Figure 3: Profile.
As H_k = 0 if and only if place k is empty, the width of an excursion of h_n turns
out to be the length of some block of cars, normalized by 1/n. We shall call h_n the
profile of the parking scheme.
5.1 Connection between parking and empirical processes
Propositions 5.3, 5.4 and 5.5 at the end of this subsection, are the key points for the
convergence of blocks' sizes.
The model of cars parking on places can be described by a sequence (U_k)_{k≥1} of
independent uniform random variables, car c_k being assumed to park (or to try to park)
on place i if U_k falls in the interval ](i − 1)/n, i/n]. Let Y_k denote the number of
cars that tried first to park on place k. We have:
H_{k+1} = Y_{k+1} + (H_k − 1)^+,
since either place k is occupied by car c_i and, among the H_k cars that tried place
k, only car c_i won't visit place k + 1, or place
k is empty and H_k = 0. We understand this equation, when k = n, with the convention that place n + 1 is place 1.
This induction alone does not give the H k 's, since we do not have any starting
value. We have thus to find an additional relation, and this is the purpose of
Proposition 5.2, that gives a first connection between hashing (or parking) and
the empirical process. Let V(n) be defined as the first index in {1, . . . , n} at which
k ↦ α_m(k/n) attains its minimum,
in which α_m is the empirical process of mathematical statistics (see Shorack &
Wellner [33], Csörgő & Révész [8] or Pollard [28] for background). We just recall that,
given a sample (U_1, . . . , U_m) of independent uniform random variables on [0, 1], almost all interesting statistics
are functionals of the empirical distribution function
F_m(t) = (1/m) #{ k ≤ m : U_k ≤ t },
and that the empirical process α_m is defined by:
α_m(t) = √m ( F_m(t) − t ).
The process α_m gives a measure of the accuracy of the approximation of the true
distribution function t by the empirical distribution function F_m(t), and was, as
such, extensively studied in mathematical statistics.
Proposition 5.2 In the hashing table, place V (n) is empty.
Proof of Proposition 5.2. Let:
Since we have:
clearly:
Thus
is nonnegative, in which we understand that Y
. (while S is not a convention).
Obviously some place k is empty if and only if H To end the proof,
we shall assume that H V (n) is positive and we shall deduce, by induction on k, that
so that no place is empty, in contradiction
with m < n. Due to (5.4) for thus, due to (5.3):
so that H V (n) 1 2, and that is the starting point of our induction. Now we assume
that for any i k, H V (n) i 2. From (5.3) we obtain that for any i k,
so that due to (5.4),
Now that we know the true value of H k for some point k, namely (n), we can
use (5.3) to compute each value of H k . Sizes of blocks of cars will follow, as blocks of
cars are in correspondence with blocks of indices k such that H k > 0 (blocks that we
call also later excursions of H k ). We nd the following explicit connection between
empirical processes and H k , in the same spirit as in Marckert & Chassaing [20].
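A small simulation illustrates both the profile H and Proposition 5.2; the definition of V(n) as the first minimum of the empirical process, and the equivalence "H_k = 0 if and only if place k is empty", follow the reconstruction given above and are assumptions of this sketch.

```python
import random

def parking_profile(n, m, rng=random):
    """Simulate m cars on n circular places.  Returns H (H[i] = number of cars
    that tried place i+1, successfully or not), the empirical process alpha
    evaluated at the points k/n, and the occupancy vector."""
    U = [rng.random() for _ in range(m)]
    occupied = [False] * n
    H = [0] * n
    for u in U:
        p = min(int(u * n), n - 1)       # first try: place p+1 if u falls in ]p/n, (p+1)/n]
        while True:
            H[p] += 1
            if not occupied[p]:
                occupied[p] = True
                break
            p = (p + 1) % n
    counts = [0] * n                      # Y: number of cars whose first try is each place
    for u in U:
        counts[min(int(u * n), n - 1)] += 1
    F, s = [], 0
    for c in counts:
        s += c
        F.append(s / m)                   # empirical distribution function at k/n
    alpha = [m ** 0.5 * (F[k] - (k + 1) / n) for k in range(n)]
    V = min(range(n), key=lambda k: alpha[k])   # first minimum of the empirical process
    assert H[V] == 0                      # sanity check of Proposition 5.2 (our reading)
    return H, alpha, occupied
```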
Proposition 5.3 For any k 2 f1; 2; ::: ; n 1g,
that can be rewritten in terms of the empirical process:
Proposition 5.4 For any k 2 fV (n)
Proof of Proposition 5.3. Set:
places in the set fV (n);
and:
Using (5.3) we obtain:
To obtain Proposition 5.3, we only have to prove that R
We already have
There exist a last index j < k such that Z As a consequence,
is the last empty place before including
Now R acording to place V (n) being empty or not. But due
to (5.3) and the fact that V (n) is the last empty place before place
we have H V
Two cases arise: either H V In the rst case we have
simultaneously R In
the second case H V (n)+k 1 entails both R and also W j+1 (=
This ends the tedious proof by induction, but
these facts will prove useful as Z k will be easier to handle than R k , when dealing
with uniform convergence in the next subsection. }
We notice, for further use, that we just proved that
Proposition 5.5 A record of W k or of Z k means that is an empty
place.
5.2 Convergence to e
Let us recall that Donsker (1952), following an idea of Doob, proved that:
Theorem 5.6 Let b be a Brownian bridge. We have: α_m → b weakly, as m → +∞.
We shall also need the next Theorem to prove Theorem 5.1:
Theorem 5.7 (Vervaat, 1979 [37]) Let V be the almost surely unique point
such that b(V) = min_{0≤t≤1} b(t), and let e be defined by e(t) = b({V + t}) − b(V),
where {x} denotes the fractional part of x. Then e is a
normalized Brownian excursion, independent of V.
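A discrete analogue of Vervaat's transform is easy to implement: rotate a random-walk bridge at its first minimum. The sketch below is only illustrative; the bridge construction by shuffling ±1 steps is an assumption of convenience.

```python
import random

def vervaat(bridge):
    """Discrete Vervaat transform: rotate a bridge (bridge[0] = bridge[-1] = 0)
    at its first minimum and subtract the minimum, giving an excursion-like path."""
    n = len(bridge) - 1
    v = min(range(n + 1), key=lambda k: bridge[k])
    return [bridge[(v + k) % n] - bridge[v] for k in range(n + 1)]

def random_walk_bridge(n, rng=random):
    """Simple +/-1 walk conditioned to end at 0 (n even), via random shuffling."""
    steps = [1] * (n // 2) + [-1] * (n // 2)
    rng.shuffle(steps)
    path, s = [0], 0
    for x in steps:
        s += x
        path.append(s)
    return path

# e = vervaat(random_walk_bridge(1000))   # nonnegative path with e[0] = e[-1] = 0
```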
Proof of Theorem 5.1. According to the Skorohod representation theorem (cf.
Rogers & Williams, (1994) II.86.1) we can assume the existence of a
and on this space a sequence n and a Brownian bridge b, such that, for almost any
converges uniformly on [0; 1] to b(!).
We could
in such a way to build for each m a corresponding
sequence U
independent random vari-
ables, and the corresponding random parking scheme. However this would not really
be needed for the proof, only for the mental picture. Note that such a sequence U (m)
would not necessarily be embedded in U (m+1) .
Each m denes a sequence S m;n
as in the previous
subsection, and denes also the corresponding V (n), as follows:
1kn
Finally, let:
be the corresponding number of cars that tried, successfully or not, place number k,
and set:
z
Assuming lim n
E(n)
Lemma 5.8 For almost any !,
uniformly
Lemma 5.9 For almost any !, V (n; !)=n converges to V (!).
As a consequence, we have
Lemma 5.10 For almost any !,
uniformly
and
z n (t) uniformly
and also:
Lemma 5.11 For almost any !,
uniformly
Taking Theorem 5.1 is a reformulation of Lemma 5.11, since we
e(ft (V (n)=n)g) uniformly
Due to the estimates in the proof of Lemma 5.8, uniform convergence holds also true,
indierently, for continuous and stepwise linear versions of h n , y n , z n or m (bntc=n)
(we shall use this remark in the proof of Theorem 4.1).
Proof of Lemma 5.8. Let M n denote max 0<kn Y k;n , where Y k;n denotes the
number of cars that want to park at place k. We have:
and, as Y k;n is binomially(m; 1=n) distributed:
nE[exp(KY 1;n )] exp( KC log n)
Thus Borel-Cantelli Lemma entails that for a suitable C, with probability 1 the
supremum norm of m (bntc=n) m (t) vanishes as quickly as C log n
Proof of Lemma 5.9. For this proof and the next one, we consider an ! such
that simultaneously m and m (bntc=n) converges uniformly to b, and such that b
reaches its minimum only once (we know that the set of such !'s has measure 1).
We set:
It is then straightforward, >from the continuity property of b, that the rst
minimum of m (bntc=n) (i.e. V (n)=n) converges to the only minimum of b (i.e. V
clearly
Now the minimum of b(t) over the set [0; 1]=[V "; V +"] is b(V )+ for some positive
, and thus, if necessarily jV V (n)=nj < " . }
Proof of Lemma 5.10. Clearly:
Proof of Lemma 5.11.Let
z
According to Proposition 5.3, or to Proposition 5.4, we have:
0st
where
0st
0st
Thus Lemma 5.11 follows from the uniform convergence of z n to z in Lemma 5.10. }
5.3 Proof of Theorem 1.3
The widths of excursions of h n (t) above zero are the sizes of the blocks of cars
of the corresponding parking scheme, normalized by n. Unfortunately, uniform
convergence of h n to h does not entails convergence of sizes of excursions. Using the
line of proof of Aldous (1997, Section 2.3), we shall argue that the excursions of h n
above 0 are also the excursions of z n above its current minimum: now the uniform
convergence of z n to z entails convergence of sizes of excursions of z n above its current
minimum to sizes of excursions of z above its current minimum, provided that z does
never reach its current minimum two times. This last condition is classically satised
for almost each sample path z, so that we have almost sure convergence of sizes of
excursions of z n or equivalently of sizes of blocks. Note that excursions of z above
its current minimum are also excursions of e above 0.
More precisely, we shall apply to z n and z the following weakened form of Lemma
7, p. 824 of [2]:
Lemma 5.12 Suppose f : [0; +1[ ! R is continuous. Let E be the set of
nonempty intervals I = (l; r) such that:
sl
Suppose that, for intervals I 1 , I 2 2 E with l 1 < l 2 we have
Suppose also that the complement of [ I2E (l; r) has Lebesgue measure 0. Let
uniformly on [0; 1]. Suppose (t n;i , i 1)
satisfy the following:
Write m(n)g. Then (n) ! for the vague
topology of measures on [0; 1] (0; 1].
As we are dealing with point processes, such as n or , we have:
Proposition 5.13 (n) ! for the vague topology if and only if, for any
y > 0 such that ([0; 1]
(i) for n large enough, (n) ([0; 1] [y;
(ii) for any x 2 [0; 1] [y; 1] such that (fxg) > 0 there is a sequence of points x n ,
0, such that x n ! x.
As an easy consequence, partly due to the fact that second components add up
to 1:
Corollary 5.14 If (n) ! for the vague topology, then the sequence of second
components of points of (n) , sorted in decreasing order, converge componentwise
and in ' 1 to the corresponding sequence for .
The Lemmata and Propositions of this subsection are hold to be straightforward
by specialists, as well as the stochastic calculus points in the next proof, so we give
their proofs in the annex. If (f the sequence of > second components
of (n) (resp. of ) is nothing else but 1
Proof of Theorem 1.3. Lemma 5.12 and Corollary 5.14, applied to
of Subsection 5.2, entails Theorem 1.3. Hypothesis of Lemma 5.12,
concerning a regular sample path of z(t), that is, almost surely, for any l 1 < l 2 ,
setting r), the Lebesgue measure of O c is
0, are well known to hold true, as e is solution of:
du;
(see [29], Chp. XI, the whole Section 3, and notably Ex. 3.11).
For the t n;i 's in Lemma 5.12, we choose the records of z n (t) so, due to Lemma
5.5, the nt n;i 's are the empty places, counted starting at V (n). Then, not depending
on i, we have z
n, and hypothesis (iii) of Lemma 5.12 is
satised by z n . }
This last proof is also a proof of Theorem 1.1 is the width of the
widest excursion of e.
5.4 Proof of Theorem 4.1
Theorem 4.1 is basically a consequence of the following identity (m n 2):
which is equivalent to relation (2.2).
Let U be uniformly distributed and independent of the Brownian excursion e.
The decomposition of n m according to the length k of the block containing car c m ,
in the identity 5.7, yields the desintegration of e according to the value x of Y 1 ()
asserted in Theorem 4.1: the factor gives the distribution of the shape
of the (suitably scaled) excursion ~
f n of h n corresponding to the block that contains
car c 1 . Marckert & Chassaing (1999) proved that if if each of the
conned parking schemes are equiprobable, then
weakly
we shall deduce that the shape f of the excursion of e that contains U 1 , being the
limiting prole of ~
f n , is independent of its width Y 1 and is distributed
as a normalized Brownian excursion.
If we make a random rotation with angle d(n k 1)W e of the parking scheme of
the m k remaining cars on the n k 1 remaining places, we obtain (n
equiprobable parking schemes, with n m '
leading to the fact that r(fW + :g) is distributed as T
Let us give a formal proof:
Proof of Theorem 4.1. Let C be the space of continuous functions on [0; 1], with
the topology of uniform convergence. The triplet of independent random variables
denes the random variable (X 1 (); f; q) and
its law Q, that is a probability measure on the space [0; 1] C 2 . The normalized
Brownian excursion e
resp. T
denes the probability measure
(resp. x ) on C. Theorem 4.1 can be written equivalently:
Z 1f(; x)
Z
Z
for any bounded uniformly continuous function on the space [0; 1] C 2 . It is
harmless to assume that
In order to prove Theorem 4.1, we shall exhibit a probability
and, on this space, a sequence of [0;
almost surely, in
to (X 1 (); f; q), for the product
topology of [0;
ne;
(iii) the conditional law, k , of f n given that X
, does not depend on n and
satises:
weakly
(iv) the conditional law, n;k , of q n given that X
n;k
weakly
As a consequence of (i):
for any bounded uniformly continuous function . We shall prove now that properties
(ii) to (iv) are su-cient to insure that, for any bounded uniformly continuous
function satisfying
have
Z 1f(; x)
Z
entailing (5.8). We end the proof with the construction of (X
Let M be the bound for jj . Set:
Z 1f(; x)
Z
Z
a
Z
Z
a
Z
Z
(dnxe=n;
Z
Z
'(n; m;
Z
Z
By dominated convergence, owing to (iii) and (iv), lim n B A. By uniform
continuity of f and , lim Finally lim n C n D due to Lemma
2.
Construction of (X
Assume we are in the setting of subsection 5.2, that is we use the Skorohod
Theorem to obtain on some
space
a sequence m that converges almost surely
uniformly to a Brownian bridge We enlarge this space to
in order to obtain two uniform random variables U 1 and W independent
of of b. We set
ne
for sake of brevity. We shall build our sequence (X
will describe, partly, the parking scheme for m(n) cars on n places.
Let us collect some basic facts concerning empirical processes: m has m positive
jumps with height 1
m , at places that we call (V (m)
Between the jumps
m has a negative slope
m. The random vector
uniformly
distributed on the simplex f0 < x 1 < x 2 < ::: < xm < 1g and a random permutation
of its components would yield a sequence (U (m)
independent uniform random variables on [0; 1], car c k trying to park rst on place
l
Unfortunately is not provided with the
space
It means that, given m , we
know the number of cars that that will park on each place k, say (Y m;n
is the number of jumps of m taking place in the interval
n ]), but we do not
know which car parks on which place. This is a slight di-culty, since we have in
mind to choose nX n as the place where car c 1 parks, so that (ii) follows at once
from relation (2.2).
In order to circumvent this problem, note that a random permutation can be
described by a random bijection from f2; 3; 4; :::; mg to f1; 2; 3; :::; m 1g and a
random uniform integer (1), independent of : rst we choose (1), then we choose
::; (m)) at random from the m 1 remaining integers, renumbered >from 1
to m 1. We shall take
that is
U (m)
car c 1 parks at place dnU (m)
1 e. We shall leave undened. In addition to the fact
that (1) and U (m)
are random uniform, we get that, almost surely:
U (m)
Incidentally, the remaining jumps are independent of U (m)
1 and uniformly distributed
on the simplex f0 < x 1 < x 2 < ::: < xm 1 < 1g.
the rst empty place on the left of
dnU (m)
e (resp. the rst empty place on the right), that is, the beginning (resp. the
end) of the block containing car c 1 . Easy considerations on the uniform convergence
of z n to z give that, almost surely,
Recall that R (1)
yielding point (ii) and also a part of point (i):
almost surely,
Let
~
R (n)h n
Uniform convergence of h n to e, uniform continuity of e(t) and (5.10) entails
the uniform convergence of ~
f n to f and the uniform convergence of
~
to As regards (iii), relation (5.7) tells us that, given that nX
~
f n is the prole associated to one of the parking schemes
of k cars on k places. Accordingly, its conditional law ~ k converges weakly to
. Similarly, due to the random rotation d(n R (n)
e, given that nX
is the prole associated to one of the (n k 1) m k equiprobable parking schemes
of m k cars on n k 1 places: its conditional law ~
n;k converges weakly to x
under the hypothesis of (iv).
Unfortunately, ~
n;k (C), so we have a little bit of additional work:
we just replace ~
f n and ~
q n by their corresponding piecewise-linear continuous approx-
imations, f n and q n . Relation (5.6) insures that f n (resp. q n ) converges uniformly
to f (resp. g), yielding point (i). Relation (5.6) insures also that their laws k (resp.
6 Proof of Theorem 1.6
Once again, we use the setting of subsection 5.2, that is we use the Skorohod Theorem
to obtain on some
space
a sequence n that converges almost surely uniformly
to a Brownian bridge We enlarge again the probability
space, so that we have a sequence of independent uniform random variables (U k ) k1 ,
both independent and independent of the sequence ( m does contain the
information about the number of cars that tried to park on each of the n places,
but it contains no information about the chronology. We shall use (U k ) 2kn to
recover the chronology, that is, to pick at random, one jump after the other, the
jumps of give the place of car c 1 , through U (n)
1 dened at relation (5.9),
and (U k ) 2kn will generate a random permutation of the rank of the remaining
order statistics f2; 3; :::; ng. As a consequence, from a given n and (U k ) 2kn , we
can recover the whole history of the process of parking n cars on n places, without
losing the almost sure uniform convergence of n to a Brownian bridge, needed for
the convergence of sizes of blocks.
Thus for each n; k we dene a prole h n;k (t) associated with the parking of the
rst cars on the n places, and we use the corresponding notation z n;k (t). The
prole h n;k (t) is given by Proposition 5.4, based on the empirical process n;k (t)
obtained by erasing the k last random choices of order statistics (places of jumps)
of n .
More precisely, let ~
uniformly distributed on the simplex
denote the remaining jumps of n , once U (n)
1 has
been erased. Let be the random permutation generated from (U k the rank
of U k , once (U k ) 2kn is sorted in increasing order, is (k). We assert that if the
rst try of car c 1 is place dnU (n)
e, and the rst try of car c k (k 2) is place dn ~
e,
the n n parking schemes are equiprobable. The k last random choices ( ~
provide a sample of k independent and uniform random variables, with an associated
empirical process ~
n;k (t). We have:
n;k (t)
The DKW inequality gives
Pr(sup
n;k (t)j x) 4
thus, using Borel-Cantelli lemma, we obtain easily that, for " > 0,
Pr
sup
sup
Accordingly, by a simple glance at the proof of Lemma 5.9, we see that, if V (n;
denotes the rst minimum of
, the convergence of V (n; k)=n to V ,
for
n, is uniform, almost surely. Compared with Lemma 5.9, we just
have to change slightly the denitions of u
sup
By inspection of relation 5.5, we see that the convergence of Z n (;
ne (t)
to Z(; t, uniformly for (; holds true almost
surely, that is
Pr
uniformly
on
Z
owing to 6.12 and to the fact that
sup
ne
ne
0:
R (k)
n;d
ne
Lemma 5.12 yields
Proposition 6.1
Pr
Theorem 1.6 follows at once. }
7 Concluding remarks
The main pending question, in our opinion, is about
does it provide
an alternative construction of the stochastic additive coalescent ? This construction
would then complete the parallel between the additive coalescent and the multiplicative
coalescent, given in the concluding remarks of [4], as the stochastic multiplicative
coalescent was identied by Aldous [2] as the limit of the normalized sequence of
sizes of connected components in the random graph model, and also as the sequence
of widths of excursions of a simple stochastic process.
An easy corollary of Theorems 4.1, 1.4 and 1.5 is that we have the joint law of
the sequence of shapes and sizes (or widths) of excursions of e, the sizes being
given by Theorem 1.4: an easy induction proves that the shapes are independent,
distributed as e, and independent of the sizes. An argument in the proof of Theorem
then says that each of the clusters of the fragmentation process B() starts anew
a fragmentation process distributed as
, given that the cluster has
size x. It rises the following questions: is this also a property of the standard additive
coalescent ? Is it a charasteristic property of the standard additive coalescent
Which additional properties are eventually needed to characterize the standard
additive coalescent ?
The scaling factor x in
is easily understood, but the time change
is less clear. However it is explained asymptotically by a property of parking schemes:
the time unit for the discrete fragmentation process associated with parking n cars
on n places, is the departure of p
cars. Due to the law of large numbers, during
one time unit, a given block of cars with size xn loses approximately x
x
xn
cars, meaning that, for the internal clock of this block, p
Acknowledgements
The starting point for this paper was the talk of Philippe Flajolet at the meeting
ALEA in February 1998 at Asnelles (concerning his paper with Viola & Poblete),
and a discussion that Philippe and the authors had in a small cafe of the French
Riviera after the SMAI meeting of September 1998. Some discussions of the rst
author with Marc Yor, and also with Uwe Rossler were quite fruitful. We thank
Philippe Laurencot for calling our attention to the works of Aldous, Pitman &
Evans on coalescence models. The papers of Perman, Pitman, & Yor about random
probability measures and Poisson-Dirichlet processes were also of a great help.
--R
The continuum random tree.
critical random graphs and the multiplicative coalescent.
Exchangeability and related topics.
The standard additive coalescent.
The probabilistic method.
A fragmentation process connected with Brownian motion.
On the frequency of numbers containing prime factors of a certain relative magnitude.
On the Analysis of Linear Probing Hashing
Mappings of acyclic and
The birth of the giant compo- nent
The art of computer programming.
Linear Probing and Graphs
Order statistics for jumps of normalised subordinators.
Exchangeable and partially exchangeable random partitions.
Random discrete distributions invariant under size-biased permuta- tion
A polytope related to empirical distribu- tions
The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator
the probable largest search time grows logarithmically with the number of records.
Convergence of stochastic processes.
Ballots and trees.
Ordered cycle lengths in a random permutation.
Empirical processes with applications to statis- tics
On an enumeration problem.
in Mathematical Essays in Honor of Gian-Carlo Rota (B
A relation between Brownian bridge and Brownian excursion
--TR
Linear probing: the probable largest search time grows logarithmically with the number of records
The first cycles in an evolving graph
The art of computer programming, volume 3
--CTR
Philippe Chassaing , Guy Louchard, Reflected Brownian Bridge area conditioned on its local time at the origin, Journal of Algorithms, v.44 n.1, p.29-51, July 2002
Svante Janson, Individual displacements for linear probing hashing with different insertion policies, ACM Transactions on Algorithms (TALG), v.1 n.2, p.177-213, October 2005
Jean Bertoin, Random covering of an interval and a variation of Kingman's coalescent, Random Structures & Algorithms, v.25 n.3, p.277-292, October 2004 | brownian excursion;hashing with linear probing;coalescence;empirical processes;parking |
633709 | Feature selection with neural networks. | We present a neural network based approach for identifying salient features for classification in feedforward neural networks. Our approach involves neural network training with an augmented cross-entropy error function. The augmented error function forces the neural network to keep low derivatives of the transfer functions of neurons when learning a classification task. Such an approach reduces output sensitivity to the input changes. Feature selection is based on the reaction of the cross-validation data set classification error due to the removal of the individual features. We demonstrate the usefulness of the proposed approach on one artificial and three real-world classification problems. We compared the approach with five other feature selection methods, each of which banks on a different concept. The algorithm developed outperformed the other methods by achieving higher classification accuracy on all the problems tested. | Introduction
A learning system's primary source of information is data. For numerical systems
like Neural Networks (NNs), data are usually represented as vectors in a
subspace of R k whose components - or features - may correspond for example to
measurements performed on a physical system or to information gathered from
the observation of a phenomenon. Usually all features are not equally
informative: some of them may be noisy, meaningless, correlated or irrelevant for
the task. Feature selection aims at selecting a subset of the features which is
relevant for a given problem. It is most often an important issue: the amount of
data to gather or process may be reduced, training may be easier, better estimates
will be obtained when using relevant features in the case of small data sets, more
sophisticated processing methods may be used on smaller dimensional spaces
than on the original measure space, performances may increase when non
relevant information do not interfere, etc.
Feature selection has been the subject of intensive research in statistics and in application domains like pattern recognition, process identification, time series modelling or econometrics. It has recently begun to be investigated in the machine learning community, which has developed its own methods. Whatever the domain, feature selection remains a difficult problem. Most of the time it is a non-monotone problem, i.e. the best subset of p variables does not always contain the best subset of q variables (q < p). Also, the best subset of variables depends on the model which will then be used to process the data - usually, the two steps are treated sequentially. Most methods for variable selection rely on heuristics which perform a limited exploration of the whole set of variable combinations.
In the field of NNs, feature selection has been studied for the last ten years, and classical as well as original methods have been employed. We discuss here the problem of feature selection specifically for NNs and review original methods which have been developed in this field. We will certainly not be exhaustive, since the literature in the domain is already extensive, but the main ideas which have been proposed are described.
We describe in sections 2 and 3 the basic ingredients of feature selection methods
and the notations. We then briefly present, in section 4, statistical methods used
in regression and classification. They will be used as baseline techniques. We
describe, in section 5, families of methods which have been developed
specifically for neural networks and may be easily implemented either for
regression or classification tasks. Representative methods are then compared on
different test problems in section 6.
2 . Basic ingredients of feature selection methods.
A feature selection technique typically requires the following ingredients:
. a feature evaluation criterion to compare variable subsets, it will be
used to select one of these subsets,
. a search procedure, to explore a (sub)space of possible variable
combinations,
. a stop criterion or a model selection strategy.
2.1.1 Feature evaluation
Depending on the task (e.g. prediction or classification) and on the model (linear, logistic, neural networks, etc.), several evaluation criteria, based either on statistical grounds or on heuristics, have been proposed for measuring the importance of a variable subset. For classification, classical criteria use probabilistic distances or entropy measures, often replaced in practice by simple interclass distance measures. For regression, classical candidates are prediction error measures. A survey of classical statistical methods may be found in (Thompson 1978) for regression and (McLachlan 1992) for classification.
Some methods rely only on the data for computing relevant variables and do not take into consideration the model which will then be used for processing these data after the selection step. They may rely on hypotheses about the data distribution (parametric methods) or not (non-parametric methods). Other methods take into account simultaneously the model and the data - this is usually the case for NN variable selection.
2.1.2 Search
In general, since evaluation criteria are non-monotone, comparison of feature subsets amounts to a combinatorial problem (there are 2^k - 1 possible subsets for k variables), which rapidly becomes computationally unfeasible, even for moderate input sizes. Branch and Bound exploration (Narendra and Fukunaga 1977) allows the search to be reduced for monotone criteria; however, the complexity of these procedures is still prohibitive in most cases. Due to these limitations, most algorithms are based upon heuristic performance measures for the evaluation and sub-optimal search. Most sub-optimal search methods follow one of the following sequential search techniques (see e.g. Kittler, 1986):
. start with an empty set of variables and add variables to the already selected variable set (forward methods),
. start with the full set of variables and eliminate variables from the selected variable set (backward methods),
. start with an empty set and alternate forward and backward steps (stepwise methods). The Plus l - Take away r algorithm is a generalisation of the basic stepwise method which alternates l forward selections and r backward deletions.
A sketch of the basic forward and backward procedures is given below.
2.1.3 Subset selection - Stopping criterion
Given a feature subset evaluation criterion and a search procedure, several methods examine all the subsets provided by the search (e.g. 2^k - 1 for an exhaustive search or k for a simple backward search) and select the most relevant according to the evaluation criterion.
When the empirical distribution of the evaluation measure or of related statistics is known, tests may be performed for the (ir)relevance hypothesis of an input variable. Classical sequential selection procedures use a stop criterion: they examine the variables sequentially and stop as soon as a variable is found irrelevant according to a statistical test. For classical parametric methods, distribution characteristics (e.g. estimates of the evaluation measure variance) are easily derived (see sections 4.1 and 4.2). For non-parametric or flexible methods like NNs, these distributions are more difficult to obtain. Confidence intervals which would allow significance testing might be computed via Monte Carlo simulations or bootstrapping. This is extremely expensive and of no practical use except for very particular cases (e.g. Baxt and White 1996). Hypothesis testing is thus seldom used with these models. Many authors use instead heuristic stop criteria.
A better methodology, whose complexity is still reasonable in most applications, is to compute, for the successive variable subsets provided by the search algorithm, an estimate of the generalization error (or prediction risk) obtained with this subset. The selected variables will be those giving the best performance. The generalization error estimate may be computed using a validation set, cross-validation or algebraic methods, although the latter are not easy to obtain with non-linear models. Note that this strategy involves retraining a NN for each subset, as in the sketch below.
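A minimal sketch of this strategy, assuming a user-supplied train(subset) routine returning a fitted regression model and a held-out validation set (all names are hypothetical):

import numpy as np

def select_by_validation(subsets, train, X_val, y_val):
    # Retrain one model per candidate subset and keep the subset with the
    # smallest validation error (a simple estimate of the prediction risk).
    best_subset, best_err = None, np.inf
    for subset in subsets:
        model = train(subset)                    # retraining for each subset
        err = np.mean((model.predict(X_val[:, subset]) - y_val) ** 2)
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err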
3 . Notations
We will denote by (x, y) in R^k x R^g the realization of a random variable pair (X, Y) with probability distribution P. x_i will be the i-th component of x and x^l the l-th pattern in a given data set D of cardinality N. In the following, we will restrict ourselves to one-hidden-layer NNs; the number of input and output units will be denoted respectively by k and g. The transfer function of the network will be denoted f. Training will be performed here according to a Mean Squared Error criterion (MSE), although this is not restrictive. We will consider, in the following, selection methods for classification and regression tasks.
4 . Model independent Feature Selection
We introduce below some methods which perform the selection and the classification or regression steps sequentially, i.e. which do not take into account the classification or regression model during selection. These methods are not NN oriented and are used here for the experimental comparison with NN-specific selection techniques (section 6). The first two are basic statistical techniques aimed respectively at regression and classification. These methods are not well fitted for NNs, since the hypotheses they rely on do not correspond to situations where NNs might be useful. However, since most NN-specific methods are heuristics, these classical techniques should be used as a baseline for comparison. The third one has been developed more recently and is a general selection technique which is free of data hypotheses and might be used for any system, either for regression or classification. It is based on a probabilistic dependence measure between two sets of variables.
4.1 Feature selection for linear regression
We will consider only linear regression, but the approach described below may be trivially extended to multiple regression. Let x_1, x_2, ..., x_k and y be real variables which are supposed centered. Let us denote by

f^(p) = Σ_{i=1..p} a_i x_i

the current approximation of y with p selected variables (the x_i are renumbered so that the p first selected variables correspond to numbers 1 to p). The residuals e = y - f^(p) are assumed independently and identically distributed. Let us denote the residual sum of squares and the proportion of the total variance of y explained by the regressor f^(p) by

SSR_p = Σ_{l=1..N} ( y^l - f^(p)(x^l) )^2    and    R_p^2 = 1 - SSR_p / Σ_{l=1..N} (y^l)^2.
For forward selection, the choice of the p-th variable is usually based on R_p^2, the partial correlation coefficient (table 1) between y and the regressor f^(p), or on an adjusted coefficient (the adjusted coefficient, which corrects R_p^2 for the number of selected variables, is often used instead of R_p^2). This coefficient represents the proportion of the total variance of y explained by the regressor f^(p). The p-th variable to select is the one for which f^(p) maximizes this coefficient. The importance of a new variable is usually measured via a Fisher test (Thompson, 1978) which compares the models with p-1 and p variables (Fs(p)_forward in table 1). Selection is stopped if Fs(p)_forward < F(1, N-p, α), the Fisher statistic with (1, N-p) degrees of freedom for a confidence level α.
              Choice                      Stop
Forward       R_p^2 (maximized)           Fs(p)_forward < F(1, N-p, α)
Backward      SSR_{p-1} (minimized)       Fs(p)_backward > F(1, N-p, α)
Table 1: Choice and Stop criteria used with statistical forward and backward methods.
Note that F_s could also be used in place of R_p^2 as a choice criterion:

Fs(p)_forward = (N - p) (R_p^2 - R_{p-1}^2) / (1 - R_p^2)          (4.1.3)

When p-1 variables have already been selected, R_{p-1}^2 has a constant value in [0,1] and maximizing F_s is equivalent to maximizing R_p^2. Equation (4.1.3) therefore selects variables in the same order as R_p^2 does.
For backward elimination, the variable eliminated from the remaining p is the least significant in terms of the Fisher test, i.e. it is the one with the smallest value of SSR_{p-1} or, equivalently, of Fs(p)_backward (table 1). Selection is stopped if Fs(p)_backward > F(1, N-p, α). A sketch of the forward procedure is given below.
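The following sketch illustrates the forward procedure for centered data, using an ordinary least-squares fit and the partial F-test of (4.1.3); scipy is assumed to be available and the 95% level is only an example setting.

import numpy as np
from scipy.stats import f as f_dist

def forward_linear_selection(X, y, alpha=0.95):
    # X: (N, k) centered inputs, y: (N,) centered target.
    N, k = X.shape
    selected, r2_prev = [], 0.0
    while len(selected) < k:
        p = len(selected) + 1
        best_i, best_r2 = None, -1.0
        for i in range(k):
            if i in selected:
                continue
            cols = selected + [i]
            coef, _, _, _ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            resid = y - X[:, cols] @ coef
            r2 = 1.0 - (resid @ resid) / (y @ y)
            if r2 > best_r2:
                best_i, best_r2 = i, r2
        fs = (N - p) * (best_r2 - r2_prev) / (1.0 - best_r2)
        if fs < f_dist.ppf(alpha, 1, N - p):   # new variable not significant: stop
            break
        selected.append(best_i)
        r2_prev = best_r2
    return selected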
4.2 Feature Selection For Classification
For classification, we shall select the variable subset which allows the best
separation of the data. Variable selection is usually performed by considering a
class separation criterion for the choice criterion and an associated F-test as
stopping criterion. As for regression, forward, backward or stepwise methods
may be used.
Data separation is usually computed through an inter-class distance measure (Kittler, 1986). The most frequent discriminating measure is the Wilks lambda (Wilks, 1963), Λ_{SV_p}, defined as follows:

Λ_{SV_p} = |W| / |W + B|          (4.2.1)

where W is the intra-class dispersion matrix corresponding to the selected variable set SV_p and B the corresponding inter-class dispersion matrix, defined as W = Σ_{j=1..g} Σ_{x^l in class j} (x^l - μ_j)(x^l - μ_j)^T and B = Σ_{j=1..g} n_j (μ_j - μ)(μ_j - μ)^T, with g the number of classes, n_j the number of samples in class j, μ_j the mean of class j and μ the global mean; |M| denotes the determinant of matrix M. The determinant of a covariance matrix being a measure of the volume occupied by the data, |W| measures the mean volume of the different classes and |W+B| the volume of the whole data set. These quantities are computed for the selected variables, so that a good discriminating power corresponds to a small value of Λ_{SV_p}: the different classes are represented by compact clusters and are well separated. This criterion is well suited to the case of multinormal distributions with equal covariance for each class; it is meaningless for e.g. multimodal distributions. This is clearly a very restrictive hypothesis.
With this measure, the statistic F_s defined below has an F(g-1, N-g-p+1) distribution (McLachlan 1992):

F_s = ((N - g - p + 1) / (g - 1)) ( Λ_{SV_{p-1}} / Λ_{SV_p} - 1 )          (4.2.2)

We can then use the Wilks lambda both for estimating the discriminating power of a variable and for stopping the selection in forward, backward (Habbema and Hermans, 1977) or stepwise methods.
For the comparisons in section 6, we used Stepdisc, a stepwise method based on (4.2.2) with a 95% confidence level. A sketch of the computation of the Wilks lambda is given below.
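As an illustration, a minimal sketch of the computation of the Wilks lambda for a given variable subset (numpy only; class labels are assumed to be encoded in a vector y):

import numpy as np

def wilks_lambda(X, y):
    # X: (N, p) data restricted to the selected variables, y: (N,) class labels.
    mu = X.mean(axis=0)
    W = np.zeros((X.shape[1], X.shape[1]))
    B = np.zeros_like(W)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        W += (Xc - mc).T @ (Xc - mc)               # intra-class dispersion
        B += len(Xc) * np.outer(mc - mu, mc - mu)  # inter-class dispersion
    return np.linalg.det(W) / np.linalg.det(W + B)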
4.3 Mutual Information
When data are considered as realizations of a random process, probabilistic information measures may be used to compute the relevance of a set of variables with respect to other variables. Mutual information is such a measure, defined as:

I(a, b) = Σ_{a,b} P(a, b) log [ P(a, b) / ( P(a) P(b) ) ]          (4.3.1)

where a and b are two variables with probability densities P(a) and P(b) and joint density P(a, b).
Mutual information is invariant under any invertible and differentiable transformation of the variables. It measures the "uncertainty reduction" on b when a is known. It is also known as the Kullback-Leibler distance between the joint distribution P(a,b) and the product of the marginal distributions P(a)P(b).
The method described below does not make use of restrictive assumptions on the data and is therefore more general and attractive than the ones described in sections 4.1 and 4.2, especially when those hypotheses do not correspond to the data processing model, which is usually the case for NNs. It may be used either for regression or discrimination. On the other hand, such non-parametric methods are computationally intensive. The main practical difficulty here is the estimation of the joint density P(a,b) and of the marginal densities P(a) and P(b). Non-parametric density estimation methods are costly in high dimensions and necessitate a large amount of data.
The algorithm presented below uses the Shannon entropy (denoted H(.)) to compute the mutual information. It is possible to use other entropy measures like quadratic or cubic entropies (Kittler, 1986). Battiti (1994) proposed to use mutual information with a forward selection algorithm called MIFS (Mutual Information based Feature Selection). P(a,b) is estimated by Fraser's algorithm (Fraser and Swinney, 1986), which recursively partitions the space using χ2 tests on the data distribution. This algorithm can only compute the mutual information between two variables. In order to compute the mutual information between x_p and the selected variable set SV_{p-1} (where x_p does not belong to SV_{p-1}), Battiti uses simplifying assumptions. Moreover, the number of variables to select is fixed before the selection. This algorithm uses a forward search, and the selected variable x_p is the one which maximises the value:

I(y, x_p) - β Σ_{x_i in SV_{p-1}} I(x_p, x_i)          (4.3.2)

where SV_{p-1} is the set of p-1 already selected variables and β is a parameter weighting the redundancy penalty.
Bonnlander and Weigend (1994) use Epanechnikov kernels for density estimation (Härdle, 1990) and a Branch and Bound (B&B) algorithm for the search (Narendra and Fukunaga, 1977). B&B guarantees an optimal search if the criterion used is monotone, and it is less computationally intensive than exhaustive search. For the search algorithm, one can also consider the suboptimal floating search techniques proposed by Pudil et al. (1994), which offer a good compromise between the simplicity of sequential methods and the relative computational cost of the Branch and Bound algorithm.
For the comparisons in section 6, we have used Epanechnikov kernels for density estimation in (4.3.2), a forward search, and the selection is stopped when the increase in mutual information falls below a fixed threshold (0.99). A simplified sketch of such a mutual-information-based forward selection is given below.
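As a simplified sketch of this family of methods, the code below replaces the kernel and Fraser-type density estimates used above by a crude histogram estimate of the mutual information, and performs a MIFS-style greedy forward search; the bin count and the redundancy weight beta are arbitrary placeholder settings, not values from the text.

import numpy as np

def mutual_information(a, b, bins=16):
    # Plug-in estimate of I(a; b) from a 2-D histogram.
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

def mifs_forward(X, y, n_select, beta=0.5):
    # Greedy selection: relevance to y penalized by redundancy with the
    # variables already selected, in the spirit of criterion (4.3.2).
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        def criterion(i):
            redundancy = sum(mutual_information(X[:, i], X[:, j]) for j in selected)
            return mutual_information(X[:, i], y) - beta * redundancy
        best = max(remaining, key=criterion)
        selected.append(best)
        remaining.remove(best)
    return selected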
5 . Model dependent feature selection for Neural Networks
Model dependent feature selection attempts to perform the selection and the processing of the data simultaneously: the feature selection process is part of the training process, and features are sought so as to optimize a model selection criterion. This "global optimization" looks more attractive than model-independent selection, where the adequacy of the two steps is left to the user. However, since the value of the choice criterion depends on the model parameters, it might be necessary to train the NN with different sets of variables: some selection procedures alternate between variable selection and retraining of the model parameters. This forbids the use of sophisticated search strategies, which would be computationally prohibitive.
Some specificities of NNs should also be taken into consideration when deriving feature selection algorithms:
. NNs are usually non-linear models. Since many parametric model-independent techniques are based on the hypothesis that the input-output dependency is linear, or that input variable redundancy is well measured by the linear correlation between these variables, such methods are clearly ill-suited to NNs.
. The error surface usually has many local minima, and relevance measures will depend on the minimum the NN has converged to. These measures should be averaged over several runs. For most applications this is prohibitive and has not been considered here.
. Except for (White 1989), who derives results on the weight distribution, there is no work in the NN community which might be used for hypothesis testing.
For NN feature selection algorithms, choice criteria are mainly based on heuristic individual feature evaluation functions. Several of them have been proposed in the literature, and we have made an attempt to classify them according to their similarity. We will distinguish between:
. zero order methods, which use only the network parameter values,
. first order methods, which use the first derivatives of the network parameters,
. second order methods, which use the second derivatives of the network parameters.
Most feature evaluation criteria only allow variables to be ranked at a given time, the value of the criterion by itself being non-informative. However, we will see that most of these methods work reasonably well.
Feature selection methods with neural networks mostly use backward search, although some forward methods have also been proposed (Moody 1994, Goutte 1997). Several methods use individual evaluation of the features for ranking them and do not take into consideration their dependencies or correlations. This may be problematic for selecting minimal relevant sets of variables. Using correlation as a simple dependence measure is not enough, since NNs capture non-linear relationships between variables; on the other hand, measuring non-linear dependencies is not trivial. While some authors simply ignore this problem, others propose to select only one variable at a time and to retrain the network with the new selected set before evaluating the relevance of the remaining variables. This allows some of the dependencies the network has discovered among the variables to be taken into account.
More critical is the difficulty of defining a sound stop criterion or model choice. Many methods use very crude techniques for stopping the selection, e.g. a threshold on the value of the choice criterion; some rank the different subsets using an estimate of the generalization error. The latter is the expected error on future data and is defined as:

G = E_{(X,Y)~P} [ r(X, Y) ] = ∫ r(x, y) dP(x, y)

where, in our case, r(x,y) is the euclidean error between desired and computed outputs. Estimates can be computed using a validation set, cross-validation or algebraic approximations of this risk like the Final Prediction Error (Akaike 1970). Several estimates have been proposed in the statistical (Gustafson and Hajlmarsson 1995) and NN (Moody 1991, Larsen and Hansen 1994) literature.
For the comparison in section 6, we have used a simple threshold when the authors gave no indication for the stop criterion, and a validation-set approximation of the risk otherwise.
5.1 Zero Order Methods
For linear regression models, the partial correlation coefficient can be expressed as a simple function of the weights. Although this is not sound for non-linear models, there have been some attempts to use the input weight values in the computation of variable relevance. This has been observed to be an inefficient heuristic: weights cannot be easily interpreted in these models.
A more sophisticated heuristic has been proposed by Yacoub and Bennani (1997); it exploits both the weight values and the network structure of a multilayer perceptron. They derived the following criterion:

S_i = Σ_{o in O} Σ_{j in H} ( |w_ji| / Σ_{i' in I} |w_ji'| ) ( |w_oj| / Σ_{j' in H} |w_oj'| )          (5.1.1)

where I, H, O denote respectively the input, hidden and output layers.
For a better understanding of this measure, let us suppose that each hidden and output unit incoming weight vector has a unitary L1 norm; the above equation can then be written as:

S_i = Σ_{o in O} Σ_{j in H} |w_ji| |w_oj|          (5.1.2)

In (5.1.2), the inner term is the product of the weights from input i to hidden unit j and from j to output o. The importance of variable i for output o is the sum of the absolute values of these products over all the paths - in the NN - from unit i to unit o. The importance of variable i is then defined as the sum of these values over all the outputs. The denominators in (5.1.1) operate as normalizing factors; this is important when using squashing functions, since these functions limit the effect of weight magnitude. Note that this measure depends on the magnitude of the inputs, so the different variables should be in a similar range. The two weight layers play different roles in an MLP, which is not reflected in (5.1.1); for example, if the outputs are linear, the normalization should be suppressed in the inner summation of (5.1.1).
They used a backward search, and the NN is retrained after each variable deletion; the stop criterion is based on the evolution of the performance on a validation set, and elimination is stopped as soon as performance decreases. A sketch of this weight-based relevance computation is given below.
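A minimal sketch of this weight-based relevance computation for a one-hidden-layer perceptron is given below; the weight matrix shapes are assumptions about how the network is stored, and the formula mirrors the normalized weight products of (5.1.1) as reconstructed above.

import numpy as np

def hvs_relevance(W_in, W_out):
    # W_in:  (hidden, inputs) weights from the inputs to the hidden units.
    # W_out: (outputs, hidden) weights from the hidden units to the outputs.
    A = np.abs(W_in) / np.abs(W_in).sum(axis=1, keepdims=True)    # normalized per hidden unit
    B = np.abs(W_out) / np.abs(W_out).sum(axis=1, keepdims=True)  # normalized per output unit
    return (B @ A).sum(axis=0)   # one relevance value per input variable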
5.2 First Order Methods
Several methods propose to evaluate the relevance of a variable by the derivative of the error or of the output with respect to this variable. These evaluation criteria are easy to compute, and most of them lead to very similar results. These derivatives measure the local change in the outputs with respect to a given input, the other inputs being fixed. Since these derivatives are not constant, as they are in linear models, they must be averaged over the training set. For these measures to be fully meaningful, inputs should be independent, and since these measures average local sensitivity values, the training set should be representative of the input space.
5.2.1 Saliency Based Pruning (SBP)
This backward method (Moody and Utans 1992) uses as evaluation criterion the variation of the learning error when a variable x_i is replaced by its empirical mean m_i (zero here, since variables are assumed centered):

S_i = MSE(m_i) - MSE          (5.2.1)

where MSE(m_i) = (1/N) Σ_{l=1..N} ( y^l - f(x_1^l, ..., m_i, ..., x_k^l) )^2 is the mean squared error computed when the i-th component of every pattern is replaced by its mean.
This is a direct measure of the usefulness of the variable for computing the output. For large values of N, computing S_i is costly, and a linear approximation may be used:

S_i ≈ (2/N) Σ_{l=1..N} ( f(x^l) - y^l ) (∂f/∂x_i)(x^l) ( m_i - x_i^l )          (5.2.2)
Variables are eliminated in increasing order of S_i. For each feature set, a NN is trained and an estimate of the generalization error - a generalization of the Final Prediction Error criterion - is computed. The model with the minimum generalization error is selected.
Changes in MSE are unambiguous only when inputs are not correlated. Since variable relevance is computed only once here, this method does not take into account possible correlations between variables. Relevance could be computed from the successive NNs in the sequence at an extra computational cost (O(k^2) relevance computations instead of O(k) in the present method). A sketch of this saliency computation is given below.
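A minimal sketch of the mean-replacement saliency (5.2.1), assuming a generic predict function for the trained network and a regression target (all names are placeholders):

import numpy as np

def sbp_saliencies(predict, X, y):
    # S_i = MSE with variable i clamped to its empirical mean, minus the baseline MSE.
    base_mse = np.mean((y - predict(X)) ** 2)
    saliencies = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        Xi = X.copy()
        Xi[:, i] = X[:, i].mean()      # replace x_i by its empirical mean
        saliencies[i] = np.mean((y - predict(Xi)) ** 2) - base_mse
    return saliencies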
5.2.2 Methods using computation of output derivatives
For a linear model, the output derivative with respect to any input is a constant, which is not the case for non-linear NNs. Several authors have proposed to measure the sensitivity of the network transfer function with respect to input x_i by computing the mean value of the output derivatives with respect to x_i over the whole training set. In the case of multilayer perceptrons, these derivatives can be computed progressively during learning (Hashem, 1992). Since they may take both positive and negative values, they may compensate and produce an average near zero. Most measures therefore use averaged squared or absolute derivatives. Tens of measures based on derivatives have been proposed, and many others could be defined; we thus give below only a representative sample of these measures.
The sum of the absolute values of the derivatives has been used e.g. in Ruck et al.:

S_i = Σ_{l=1..N} Σ_{j=1..g} | (∂f_j/∂x_i)(x^l) |          (5.2.3)

For classification, Priddy et al. (1993) remark that, since the error probability for decision j given pattern x may be estimated by 1 - f_j(x), (5.2.3) may be interpreted as the absolute value of the error probability derivative averaged over all decisions (outputs) and data. A sketch of such derivative-based saliencies is given below.
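The sketch below illustrates this family of measures; instead of backpropagating the derivatives through the network as the authors do, it approximates them by central finite differences around each training pattern, which is only a convenient stand-in.

import numpy as np

def derivative_saliency(predict, X, eps=1e-4):
    # Sum over patterns and outputs of |df_j/dx_i|, as in (5.2.3).
    # predict maps an (N, k) input matrix to an (N, g) output matrix.
    N, k = X.shape
    saliency = np.zeros(k)
    for i in range(k):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, i] += eps
        Xm[:, i] -= eps
        d = (predict(Xp) - predict(Xm)) / (2.0 * eps)   # (N, g) partial derivatives
        saliency[i] = np.abs(d).sum()
    return saliency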
Squared derivatives may be used instead of absolute values; Refenes et al. (1996), for example, proposed for regression a normalized sum:

S_i = (1/N) Σ_{l=1..N} ( (∂f/∂x_i)(x^l) )^2 var(x_i) / var(y)          (5.2.4)

where var denotes the variance. They also proposed a series of related criteria, among which:
- a normalized standard deviation of the derivatives:

S_i = std_l( (∂f/∂x_i)(x^l) ) / | (1/N) Σ_{l=1..N} (∂f/∂x_i)(x^l) |          (5.2.5)
- a weighted average of the absolute values of the derivatives, where the weights reflect the relative magnitudes of x and f(x):

S_i = (1/N) Σ_{l=1..N} | (∂f/∂x_i)(x^l) | |x_i^l| / |f(x^l)|          (5.2.6)
All these measures being very sensitive to how representative the sample set is of the input space, several authors have proposed to use a subset of the sample in order to increase the significance of their relevance measure.
In order to obtain robust methods, "pathological" training examples should be discarded. For regression and radial basis function networks, Dorizzi et al. (1996) propose to use the 95th percentile of the absolute values of the derivatives:

S_i = P_95 ( | (∂f/∂x_i)(x^l) |, l = 1, ..., N )          (5.2.7)

Aberrant points being eliminated, this contributes to the robustness of the measure. Note that the same idea could be used with the other relevance measures proposed in this paper.
Following the same line, Czernichow (1996) proposed a heuristic criterion for regression, estimated on a set of non-pathological examples whose cardinality is N'. The proposed choice criterion is:

S_i = Σ_{l=1..N'} ( (∂f/∂x_i)(x^l) )^2 / Σ_{j=1..k} Σ_{l=1..N'} ( (∂f/∂x_j)(x^l) )^2          (5.2.8)
For classification, Rossi (1996), following a proposition made by Priddy et al. (1993), considers only the patterns which are near the class frontiers. He proposes the following relevance measure:

S_i = Σ_{x^l in frontier} ( (∂f/∂x_i)(x^l) )^2 / || ∇_x f(x^l) ||^2          (5.2.9)

The frontier is defined as the set of points x^l for which || ∇_x f(x^l) || > ε, where ε is a fixed threshold. Several authors have also considered the relative contribution of the partial derivatives to the gradient, as in (5.2.9).
All these methods use a simple backward search.
For the stopping criterion, all these authors use heuristic rules, except for Refenes et al. (1996), who define statistical tests for their relevance measures. For non-linear NNs, this necessitates an estimation of the distribution of the relevance measure, which is very costly and, in our opinion, usually prohibits this approach, even if it looks attractive.
5.2.3 Links between these methods
All these methods use simple relevance measures which depend upon the gradient of the network outputs with respect to the input variables. It is difficult to rank the different criteria; all that can be said is that it is wise to use some reasonable rules, like discarding aberrant points for robustness, or retraining the NN after discarding each variable and computing new relevance measures for each NN in the sequence, in order to take into account dependencies between variables. In practice, all these methods give very similar results, as will be shown in section 6. We summarize in table 2 the main characteristics of the relevance measures for the different methods.
Method                 Task    Data used
Moody (5.2.1)          C/R     All
Refenes (5.2.5)        C/R     All
Dorizzi (5.2.7)        C/R     Non-pathological data
Refenes (5.2.6)        C/R     All
Czernichow (5.2.8)     C/R     Non-pathological data
Refenes (5.2.4)        C/R     All
Ruck (5.2.3)           C/R     All
Rossi (5.2.9)          C       Frontier between classes
Table 2. Computation of the relevance of a variable by different methods using the derivative of the network function. C/R denote respectively Classification and Regression tasks.
5.3 Second Order Methods
Several methods propose to evaluate the relevance of a variable by computing weight pruning criteria for the set of weights of each input node. We present below three methods. The first one is a Bayesian approach for computing the weight variance. The other two use the Hessian of the cost function to compute the dependence of the cost function upon the input unit weights.
5.3.1 Automatic Relevance Determination (ARD)
This method was proposed by MacKay (1994) in the framework of Bayesian learning. In this approach, weights are considered as random variables and regularization terms taking into account each input are included in the cost function. Assuming that the prior probability distribution of the group of weights for the i-th input is Gaussian, the posterior variance σ_i^2 for this input is estimated (with the help of the Hessian matrix).
ARD has been successful for time series prediction, where learning with regularization terms improved the prediction performance. However, ARD has not really been used as a feature selection method, since variables were not pruned during training.
5.3.2 Optimal Cell Damage
Several neural selection methods have been inspired by weight pruning techniques. For the latter, the decision to prune a weight is made according to a relevance criterion often named the weight saliency; the weight is pruned if its saliency is low. Similarly, the saliency of an input cell is usually defined as the sum of the saliencies of its weights:

S_i = Σ_{w_j in fan-out(i)} s_j

where fan-out(i) is the set of weights leaving input i and s_j is the saliency of weight w_j.
Optimal Cell Damage (OCD) has been proposed by Cibas et al. (1994a, 1996) (a similar method has also been proposed by Mao et al., 1994). This feature selection method is inspired by the Optimal Brain Damage (OBD) weight pruning technique developed by LeCun (1990). In OBD, the connection saliency is defined by:

s_j = (1/2) h_jj w_j^2

which is an order-two Taylor expansion of the MSE variation around a local minimum, h_jj being the j-th diagonal element of the Hessian matrix H of the MSE with respect to the weights. The Hessian matrix H can be computed during gradient descent, but this may be computationally intensive for large networks. For OBD, the authors use a diagonal approximation of the Hessian, which can then be computed in O(N). The saliency of an input variable is defined accordingly as:

S_i = Σ_{w_j in fan-out(i)} (1/2) h_jj w_j^2          (5.3.5)
Cibas et al. (1994) proposed to use (5.3.5) as a choice criterion for eliminating variables. The NN is trained so as to reach a local minimum, and the variables whose saliency is below a given threshold are eliminated. The threshold value is fixed by cross-validation. This process is then repeated until no variable is found below the threshold.
This method has been tested on several problems and gave satisfying results.
Once again, the difficulty lies in selecting an adequate threshold. Furthermore,
since several variables can be eliminated simultaneously whereas only individual
variable pertinence measures are used, significant sets of dependent variables
may be eliminated.
For stopping, the generalization performances of the NN sequence are estimated
via a validation set and the variable set corresponding to the NN with the best
performances is chosen.
The diagonal approximation of the Hessian has been questioned by several authors. Hassibi and Stork (1993), for example, proposed a weight pruning algorithm, Optimal Brain Surgeon (OBS), which is similar to OBD but uses the whole Hessian for computing weight saliencies. Stahlberger and Riedmiller (1997) proposed a feature selection method similar to OCD except that it takes into account the non-diagonal terms of the Hessian.
For all these methods, the saliency is computed using the error variation on the training set as the performance measure. Weight estimation and model selection thus both use the same data set, which is not optimal. Pedersen et al. (1996) propose two weight pruning methods, gOBD and gOBS, that compute weight saliency according to an estimate of the generalization error, the Final Prediction Error (Akaike 1970). Similarly to OBD and OBS, these methods could also be transformed into feature selection methods. A sketch of the OCD-style input saliency computation is given below.
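A minimal sketch of the OCD-style input saliency of (5.3.5) is given below; it assumes that a diagonal approximation of the Hessian has already been computed elsewhere (e.g. during training) and that the mapping from input units to their outgoing weights is known.

import numpy as np

def ocd_saliencies(weights, hessian_diag, fan_out):
    # weights, hessian_diag: flat arrays over all first-layer weights.
    # fan_out[i]: indices (into these arrays) of the weights leaving input i.
    w2 = 0.5 * hessian_diag * weights ** 2          # OBD saliency per weight
    return np.array([w2[idx].sum() for idx in fan_out])

def prune_inputs(saliencies, threshold):
    # Inputs whose saliency falls below the (cross-validated) threshold are removed.
    return [i for i, s in enumerate(saliencies) if s < threshold]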
5.3.3 Early Cell Damage (ECD)
Using a second-order Taylor expansion, as in the OBD family of methods, is justified only when a local minimum has been reached and the cost is locally quadratic around this minimum. Both hypotheses are rarely met in practice. Tresp et al. (1997) propose two weight pruning techniques from the same family, coined EBD (Early Brain Damage) and EBS (Early Brain Surgeon). They use a heuristic justification to take early stopping into account by adding a new term to the saliency computation.
These methods can be extended to feature ranking; we will call ECD (Early Cell Damage) the EBD extension. For ECD, the saliency of input i is defined as:

S_i = Σ_{w_j in fan-out(i)} (1/2) h_jj ( w_j - g_j / h_jj )^2          (5.3.6)

where g_j = ∂MSE/∂w_j; the additional gradient term accounts for the fact that training may have been stopped before a local minimum was reached.
The algorithm we propose is slightly different from OCD: only one variable is eliminated at a time, and the NN is retrained after each deletion.
For choosing the "best" set of variables, we have used a variation of the "selection according to an estimate of the generalization error" method. This estimate is computed using a validation set. Since the performances may oscillate and may not be significantly different, several subsets may have similar performances (see e.g. figure 1). Using a Fisher test, we compare the performances of each model with those of the best model; we then select the set of networks whose performances are similar to the best ones and choose, among these networks, the one with the smallest number of input variables. A sketch of this elimination loop is given below.
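A minimal sketch of this elimination loop is given below; train and relevance are user-supplied routines (retraining the network and computing the chosen saliency), and the fixed tolerance used to group near-equivalent subsets is a crude stand-in for the Fisher test described above.

import numpy as np

def backward_elimination(train, relevance, X_val, y_val, features, tol=0.01):
    # train(features) -> fitted network; relevance(model, features) -> one saliency per feature.
    history, features = [], list(features)
    while len(features) >= 1:
        model = train(features)                       # retrain after each deletion
        pred = model.predict(X_val[:, features]).argmax(axis=1)
        history.append((list(features), np.mean(pred != y_val)))
        if len(features) == 1:
            break
        sal = relevance(model, features)
        features.pop(int(np.argmin(sal)))             # drop the least relevant variable
    best_err = min(err for _, err in history)
    candidates = [(f, e) for f, e in history if e <= best_err + tol]
    return min(candidates, key=lambda t: len(t[0]))   # smallest near-optimal subset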
6 . Experimental comparison
We now present comparative performances of the different feature selection methods. Comparing these methods is a difficult task: there is no unique measure which characterizes the importance of each input, and the selection accuracy also depends on the search technique and on the variable subset choice criterion. In the case of NNs, these different steps rely on heuristics which could be exchanged from one method to the other. The NNs used are multilayer perceptrons with one hidden layer of 10 neurons.
The comparison we provide here is not intended as a definitive ranking of the different methods but as an illustration of the general behavior of some of the methods which have been described before. We have used two synthetic classification problems which illustrate different difficulties of variable selection. In the first one the frontiers are "nearly" linear and there are dependent variables as well as pure noise variables. The second problem has non-linear frontiers and its variables can be chosen independent or correlated.
The first problem was originally proposed by Breiman et al. (1984). It is a three-class waveform classification problem with 21 noisy, dependent features. We have also used a variation of this problem where 19 pure noise variables are added to the 21 initial variables (there are 40 inputs for this variant). The training set has 300 patterns and the test set 4300. A description of this problem is provided in the appendix. The performance of the optimal Bayes classifier estimated on this test set is 86% correct classification. A performance comparison appears in tables 3 and 4 for these two instances.
Method                p*    Selected variables                               Perf.
Stepdisc (4.2.2)      14    000110111111111011100 0000000000000000000        -
Bonnlander (4.3.2)    12    000011101111111110000 0000000000000000000        -
Moody (5.2.1)         -     -                                                -
Ruck (5.2.3)          -     -                                                -
Dorizzi (5.2.7)       -     -                                                -
Czernichow (5.2.8)    17    010111111111111111100 0000000000000000000        -
Cibas (5.3.5)         9     000001111110111000000 0000000000000000000        82.26 %
Leray (5.3.6)         11    000001111111111100000 0000000000000000000        -
Table 3. Performance comparison of different variable selection methods on the noisy wave problem.
For the noisy problem, all methods do eliminate the pure noise variables. Except for the two methods at the bottom of table 3, which give slightly lower performances and select fewer variables, all give similar values around 85% correct classification. Stepdisc also gives good performances, since in this problem the data have a unimodal distribution and the frontiers are nearly linear. For the non-noisy problem, the performances and the ordering of the methods change. The two techniques at the bottom of table 4 now give slightly better performances.
Method                p*    Selected variables         Perf.
None                  21    111111111111111111111      -
Stepdisc (4.2.2)      14    001110101111111011100      -
Bonnlander (4.3.2)    8     000001100111101010000      -
Moody (5.2.1)         -     -                          -
Ruck (5.2.3)          -     -                          -
Dorizzi (5.2.7)       -     -                          -
Czernichow (5.2.8)    -     -                          -
Cibas (5.3.5)         15    001011111111111110100      -
Leray (5.3.6)         -     -                          -
Table 4. Performance comparison of different variable selection methods on the original wave problem.
Figure 1. Performance comparison of two variable selection methods (OCD and ECD) according to the number of remaining variables, for the noisy wave problem.
Figure 1 shows performance curves for the two methods, OCD and ECD, estimated on a validation set. Since we have used a single validation set, there are small fluctuations in the performances. Some form of cross-validation should be used in order to get better estimates; the test strategy proposed for ECD also looks attractive in this case. It can be seen that, for this problem, performances remain more or less stable during the backward elimination (they even rise slightly) until they quickly drop when relevant variables are removed.
Figure 2. Performance comparison of different variable selection methods vs. percentage of selected variables on the original wave problem. x axis: percentage of variables selected, y axis: percentage of correct classification.
Figure 2 shows the position of the different variable selection methods for the original wave problem according to their performance (y axis) and the percentage of selected variables (x axis). The best methods are those with the best performances and the smallest number of variables. On this problem, "Leray" is satisfying (see figure 2), "Yacoub" does not delete enough variables, while "Bonnlander" deletes too many variables.
The second problem is a two-class problem in a 20-dimensional space. The classes are distributed according to two Gaussians with means μ1 = (0, ..., 0) and μ2, whose components increase with the variable index, and with covariance matrices Σ1 = Σ2 = I (the scale a of μ2 is chosen so that ||μ1 - μ2|| has a fixed value). In this problem, variable relevance is therefore ordered according to the index: x_1 is useless and x_{i+1} is more relevant than x_i.
Method                p*    Selected Variables           Perf.
Stepdisc (4.2.2)      17    10001111111111111111         -
Bonnlander (4.3.2)    5     00010000000000011011         -
Moody (5.2.1)         9     01000100011000110111         -
Ruck (5.2.3)          -     -                            -
Dorizzi (5.2.7)       11    00000000101111111111         -
Czernichow (5.2.8)    9     00000000001101111111         -
Cibas (5.3.5)         -     -                            -
Leray (5.3.6)         15    01011011101110111111         -
Table 5. Performance comparison of different variable selection methods on the two-Gaussian problem with uncorrelated variables.
Figure 3. Performance comparison of different variable selection methods vs. percentage of selected variables on the two-Gaussian problem with uncorrelated variables. x axis: percentage of variables selected, y axis: percentage of correct classification.
Table 5 shows that Stepdisc is not suited to this non-linear frontier: it is the only method that selects x_1, which is useless for this problem. We can see in figure 3 that Bonnlander's method deletes too many variables, whereas Yacoub's stop criterion is too rough and does not delete enough variables.
In another experiment, we replaced the identity matrix in Σ1 and Σ2 by a block diagonal matrix. Each block is 5x5, so that there are four groups of five successive correlated variables in the new problem.
Method                p*    Selected Variables           Perf.
None                  20    11111111111111111111         90.58 %
Stepdisc (4.2.2)      11    00001101011010110111         -
Bonnlander (4.3.2)    5     00001001010000100001         -
Ruck (5.2.3)          10    -                            91.06 %
Leray (5.3.6)         7     -                            90.72 %
Table 6. Performance comparison of different variable selection methods on the two-Gaussian problem with correlated variables.
Table 6 gives the results of some representative methods for this problem:
. Stepdisc still gives a model with good performances but selects many correlated variables,
. Bonnlander's method selects only 5 variables and gives significantly lower results,
. Ruck's method obtains good performances but selects some correlated variables,
. Leray's method, thanks to the retraining after each variable deletion, finds models with good performances and few variables (7, compared to 10 and 11 for Ruck and Stepdisc).
7 . Conclusion
We have reviewed variable selection methods developed in the field of Neural
Networks. The main difficulty here is that NNs are non linear systems which do
not make use of explicit parametric hypothesis. As a consequence, selection
methods rely heavily on heuristics for the three steps of variable selection :
relevance criterion, search procedure - NN variable selection use mainly
backward search - and choice of the final model. We first discussed the main
difficulties for developing each of these steps. We then introduced different
families of methods and discussed their strengths and weaknesses. We believe
that a variable selection method must remain computationally feasible for being
useful, and we have not considered techniques which rely on computer intensive
methods like e.g. bootstrap at each step of the selection . Instead, we have
proposed a series of rules which could be used in order to enhance several of the
methods which have been described, at a reasonable extra computational cost,
e.g. retraining each NN in the sequence and computing the relevance for each of
these NN allows to take into account some correlations between variables, simple
estimates of the generalization error may be used for the evaluation of a variable
subset, simple tests on these estimates, allow to choose minimal variable sets
(section 5.3.3). Finally we performed a comparison of representative NN
selection techniques on synthetic problems.
--R
Statistical Predictor Identification
Using Mutual Information for Selecting Features in Supervised Neural Net Learning
Bootstrapping confidence intervals for clinical input variable effects in a network trained to identify the presence of acute myocardial infraction
Selecting Input Variables Using Mutual Information and Nonparametric Density Evaluation
Classification and Regression Trees.
Fogelman Soulié
Fogelman Soulié
Architecture Selection through Statistical Sensitivity Analysis.
Independent Coordinates for Strange Attractors from Mutual Information
Extracting the Relevant Decays in Time Series Modelling
Gustafson and Hajlmarsson
Selection of Variables in Discriminant Analysis by F-statistic and Error Rate
Applied Nonparametric Regression.
Econometric Society Monograph n.
Sensitivity Analysis for Feedforward Artificial Neural Networks with Differentiable Activation Functions.
Second Order Derivatives for Network Pruning
Feature Selection and Extraction
Generalized performances of regularized neural networks models.
Bayesian Non-linear Modelling for the Energy Prediction Competition
Discriminant Analysis and Statistical Pattern Recognition
Note on generalization
Principled Architecture Selection for Neural Networks: Application to Corporate Bond Rating Prediction.
Prediction Risk and Architecture Selection for Neural Networks in From Statistics to Neural Networks - Theory and Pattern Recognition Applications
A Branch and Bound Algorithm for Feature Subset Selection.
Floating search methods in feature selection.
Pattern Recognition Letters
Attribute Suppression with Multi-Layer Perceptron
Fast Network Pruning and Feature Extraction Using the Unit-OBS Algorithm
Selection of Variables in Multiple Regression.
Learning in Artificial Neural Networks
Mathematical Statistics
HVS: A Heuristic for Variable Selection in Multilayer Artificial Neural Network Classifier.
--TR
Feature selection for automatic classification of non-Gaussian data
Image enhancement and thresholding by optimization of fuzzy compactness
Introduction to statistical pattern recognition (2nd ed.)
Floating search methods in feature selection
Merging back-propagation and Hebbian learning rules for robust classifications
Feature Selection
Neural Networks for Pattern Recognition
--CTR
E. Gasca , J. S. Snchez , R. Alonso, Rapid and brief communication: Eliminating redundancy and irrelevance using a new MLP-based feature selection method, Pattern Recognition, v.39 n.2, p.313-315, February, 2006
Rajen B. Bhatt , M. Gopal, On fuzzy-rough sets approach to feature selection, Pattern Recognition Letters, v.26 n.7, p.965-975, 15 May 2005
M. Bacauskiene , A. Verikas, Selecting salient features for classification based on neural network committees, Pattern Recognition Letters, v.25 n.16, p.1879-1891, December 2004
Chao-Ton Su , Long-Sheng Chen , Tai-Lin Chiang, A neural network based information granulation approach to shorten the cellular phone test process, Computers in Industry, v.57 n.5, p.412-423, June 2006
D. François, F. Rossi, V. Wertz, M. Verleysen, Resampling methods for parameter-free and robust feature selection with mutual information, Neurocomputing, v.70 n.7-9, p.1276-1288, March, 2007
R. E. Abdel-Aal, GMDH-based feature ranking and selection for improved classification of medical data, Journal of Biomedical Informatics, v.38 n.6, p.456-468, December 2005 | regularization;classification;feature selection;neural network |
63382 | Stochastic Petri Net Analysis of a Replicated File System. | The authors present a stochastic Petri net model of a replicated file system in a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, can be used in addition to or in place of files to reduce overhead. A model sufficiently detailed to include file status (current or out-of-date) as well as failure and repair of hosts where copies or witnesses reside, is presented. The number of copies and witnesses is not fixed, but is a parameter of the model. Two different majority protocols are examined. | Introduction
Users of distributed systems often replicate important files on different hosts, to protect them from
a subset of host failures. The consistency of the files across the database is difficult to maintain
manually, and so a data abstraction, the replicated file, has been introduced to automate the update
procedure [Ellis83, Popek81, Stonebraker79]. The consistency of the files is most often maintained
by assigning votes to each copy of the file and by automatically assembling a quorum (majority)
of votes for a file access. Requiring a majority for each update insures that at most one write set
can exist at any time, and that a quorum automatically includes at least one of the most recently
updated copies. Such a system tolerates host failures to the extent that a minority of votes may
be unavailable at any time, but a file update will still be permitted.
Pâris suggests replacing some copies with witnesses, which contain no data but which can testify to the current state of the copies. Witnesses have low storage costs and are simple to update [Paris86a, Paris86]. In [Paris86] it is believed that the replacement of some copies with witnesses has a minor impact on the availability of the file system.
often used [Jajodia87, Paris86a, Paris86]. If the Markov chain is constructed manually, the analysis
is often limited to small systems (perhaps three copies) modeled at the highest level. Without the
proper tools, it is difficult to develop a Markov model of a large system detailed enough to include
both failure and repair of hosts, and voting and updating procedures. A higher level "language"
is needed for the description of the system to allow the automatic generation of the Markov chain.
The stochastic Petri net model provides such a language. We present a stochastic Petri net model
for a distributed file system, whose structure is independent of the number of copies and witnesses.
This model is automatically converted into a Markov chain and solved numerically for the state
probabilities (the generated Markov chains have up to 1746 states).
After describing the replicated file system being considered and defining the stochastic Petri
net model, we incrementally develop a model that includes copies and witnesses, failure and repair,
requests and voting. We then solve the model for the availability of the filesystem under static and
adaptive voting algorithms. Then we examine the performance/reliability tradeoffs associated with
preferring copies vs. preferring witnesses to participate in a quorum, when more than a majority
are available.
2 Replicated File System
Mutual consistency of the various representatives of a replicated file can be maintained in at
least three ways [Ellis83]: writing to all copies at each update, designating a 'primary' copy as
leader [Parker83, Stonebraker79], or using various weighted voting algorithms [Davcev85, Ellis83,
Gifford79, Jajodia87, Paris86a, Paris86, Thomas79]. In a weighted voting algorithm, each copy is
assigned a number of votes; a set of copies representing a majority of votes (a quorum) must be
assembled for each update.
Copies may be assigned different weights (number of votes), including none, and different quorums
can be defined for read and write operations. Consistency is guaranteed as long as the
quorums are high enough to disallow simultaneous read and write operations on disjoint sets of
copies. The simplest quorum policies require a majority of copies to participate in any read or
write, by assigning one vote to each. However, the system administrator can alter the performance
by manipulating the voting structure of a replicated file [Gifford79].
Associated with each representative is a timestamp, or version number, which is increased on each update. At any time, the copies representing a majority of votes have the same (highest) version number, and the assembly of a majority is guaranteed to contain a representative with the highest version number. Copies with the same version number are identical. Each time a quorum of copies is gathered, any out-of-date copy participating in the quorum is brought up to date before the transaction is processed. Copies not participating in the quorum become obsolete, since they will no longer have the highest version number.
Pâris suggests replacing some of the copies by witnesses to decrease the overhead of maintaining multiple copies of a file. Witnesses contain no data, but can vote to confirm the current state of copies. They can be created and maintained easily; they simply contain version numbers reflecting the most recent write observed. A quorum can be gathered counting witnesses as copies, but at least one current copy must be included. If there is no copy with the same version number as the highest witness version number, then an update cannot be performed. Pâris further suggests transforming witnesses into copies and vice versa, as needed.
3 Assumptions
In the model developed, we assume for simplicity that each representative (copy or witness) is assigned one vote. This vote assignment is called the uniform assignment in [Barbara87], where it is shown to optimize reliability for fully connected, homogeneous systems with perfect links. We also assume that copies and witnesses are not transformed into one another. Further, we assume the following scenario for the gathering of a quorum. When a request is received, a status request message is sent to each site known to contain a representative. Each representative residing on a functioning host computer answers, communicating its status. If more than a majority of representatives are available, a subset is chosen to participate in the update. The selection of the members of the participating subset tries to minimize the time needed to service the request by including as many current representatives as possible, preferably witnesses, since they are fast to update. A quorum is thus formed by first choosing a current copy, then, as needed, current witnesses, more current copies, outdated witnesses, and outdated copies, in that order (a sketch of this selection order is given below). All current copies and witnesses participating in the quorum are updated, while the remaining ones become outdated, since they will not have the most recent timestamp.
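A minimal sketch of this selection order is given below; representatives are simply modeled as entries of Python lists of available (i.e. residing on functioning hosts) copies and witnesses, which is only a convenient abstraction of the model described in the text.

def assemble_quorum(current_copies, current_witnesses,
                    outdated_witnesses, outdated_copies, total_representatives):
    # Preference order: one current copy first, then current witnesses,
    # remaining current copies, outdated witnesses and finally outdated copies.
    majority = total_representatives // 2 + 1
    if not current_copies:
        return None                     # no quorum possible without a current copy
    quorum = [current_copies[0]]
    for pool in (current_witnesses, current_copies[1:],
                 outdated_witnesses, outdated_copies):
        for rep in pool:
            if len(quorum) >= majority:
                break
            quorum.append(rep)
    return quorum if len(quorum) >= majority else None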
The host computers can fail and be repaired, so not all representatives are available at all times. A representative on a machine that has failed is assumed to be out-of-date when the machine is subsequently repaired. This is a conservative, but not unrealistic, assumption. It is possible to model exactly whether a representative has gone out-of-date while the host was down, but we chose not to do so because the model would be more complex. If a quorum cannot be formed when a request is received, a manual reconstruction is initiated to restore the system to its original state. Again, to keep the model from becoming too complex, we only consider write requests and we assume that there is a time lapse between requests. We also assume that hosts do not fail while a quorum is being gathered. Hosts may fail during the update procedure. All times between events are assumed exponentially distributed, to allow a Markov chain analysis. If the times had a general distribution, semi-Markov analysis or simulation would be needed [Bechta84a].
4 Stochastic Petri Net
A Petri net is a graphical model useful for modeling systems exhibiting concurrent, asynchronous
or nondeterministic behavior [Peterson81]. The nodes of a Petri net are places (drawn as circles),
representing conditions, and transitions (drawn as bars), representing events. Tokens (drawn as
small filled circles) are moved from place to place when the transitions fire, and are used to denote
the conditions holding at any given time. As an event is usually enabled by a combination of
conditions, a transition is enabled by a combination of tokens in places. An arc is drawn from a
place to a transition or from a transition to a place. Arcs are used to signify which combination
of conditions must hold for the event to occur and which combination of conditions holds after the
event occurs. If there is an arc from place p to transition t, p is an input place for t, if there is an
arc from t to p, p is an output place for t. (These conditions are not exclusive.)
A transition is enabled if each input place contains at least one token; an enabled transition
fires by removing a token from each input place and depositing a token in each output place.
Stochastic Petri nets (SPN) were defined by associating an exponentially distributed firing time
with each transition [Molloy81, Natkin80]. A SPN can be analyzed by considering all possible
markings (enumerations of the tokens in each place) and solving the resulting reachability graph
as a Markov chain. Generalized stochastic Petri nets (GSPN) allow immediate (zero firing time)
and timed (exponentially distributed firing time) transitions; immediate transitions are drawn as
thin bars, timed transitions are drawn as thick bars. GSPN are solved as Markov chains as well
[Ajmone84]. Extended stochastic Petri nets (ESPN) [Bechta84a, Bechta85] allow transition times
to be generally distributed. In some cases an ESPN can be solved as a Markov chain or as a semi-Markov
process, otherwise it can be simulated. The model used in this paper is basically the GSPN
with variations from the original (or the current) definition, so we will use the term stochastic Petri
net (SPN) with the generic meaning of "Petri net with stochastic timing".
Three additional features to control the enabling of transitions are included in our SPN model: inhibitor arcs, transition priorities, and enabling functions. An inhibitor arc [Peterson81] from a place to a transition disables the transition if the corresponding input place is not empty. If several transitions with different priorities are simultaneously enabled in a marking, only the ones with the highest priority are chosen to fire, while the others are disabled. An enabling function is a logical function defined on the marking; if it evaluates to false, it disables the transition. More specifically, a transition t is enabled if and only if (1) there is at least one token in each of its input places, (2) there is no token in any of its inhibiting places, (3) its enabling function evaluates to true, and (4) no other transition u with priority over t and satisfying (1), (2), and (3) exists. In the description, each transition is labeled with a tuple ("name", "priority", "enabling function"), where the enabling function is identically equal to true if absent. The priority is indicated by an integer; a lower number indicates a higher priority. A sketch of this enabling rule is given below.
If several enabled transitions are scheduled to fire at the same instant, a probability distribution (possibly marking-dependent) is defined across them to determine which one(s) will fire. In the SPN we present, only immediate transitions require the specification of these probabilities, since the probability of simultaneous firing for continuous (exponential) distributions is null.
5 Model of Quorum
Consider the SPN shown in figure 1 (part of a larger one considered later). At a certain moment in
time, the SPN contains a token in the place labeled START, and a number of tokens in the places
labeled CC (current copies), CW (current witnesses), OW (outdated witnesses), and OC (outdated
copies), representing the number of representatives in the corresponding state at the beginning of
the observation. The remaining places are empty. Assume that it is possible to reach a quorum,
that is, there is a token in the CC place and that a minority of the hosts are down.
Transition T16 is enabled and can fire after an exponentially distributed amount of time signifying
the time needed to send status request messages and receive responses. After T16 fires, the
assembly of the quorum begins. It is likely that more than a majority of representatives respond; if
so, a subset of them must be selected to participate in the update. As mentioned earlier, we would
like to minimize the time needed to perform the update, so we prefer to include as many witnesses
as possible. We must always include a current copy in the quorum, so a token is removed from the
CC place and deposited in the QC (quorum copies) place when T16 fires. A token is also deposited
in the DRIVE place, to start the gathering of other representatives.
Transitions T 17, T 18, T19 and T20 represent the inclusion of current witnesses, current copies,
outdated witnesses and outdated copies in the quorum, respectively. We want these transitions to
fire in the order T 17, T 18, T19 and T 20, so they are assigned priorities of 2, 3, 4, and 5 respectively.
We want these transitions to fire only until a majority is reached, so we associate an enabling
function, q̄, with them. The logical function q evaluates to true if a majority of representatives has
been selected to participate in the update transaction; q̄ is the logical complement of q. Transition
T17 will fire once for each current witness, then transition T18 will fire once for each current copy.
Then outdated witnesses will be updated, and if necessary, outdated copies will be made current.
These last two events require some time, so transitions T19 and T20 are timed. As soon as q
evaluates to true, the gathering will stop, so T 18, T 19, and T20 might not fire as many times as
they are enabled.
When q evaluates to true, transition T 26, with priority 6, is enabled. At this point, the tokens
in places QC and QW (quorum witnesses) denote the participants in the update. The sum of the
tokens in these two places represents a majority of the representatives in the file system.
We want to embed this SPN describing the gathering of a quorum within a larger SPN including
failures, repairs, and the update procedure. We will consider the quorum gathering model as
a subnet activated when a token is deposited in the START place and exited when a token is
deposited in the DONE place. The CC, CW, OW, and OC places will be shared with the larger
net. The shared places are drawn with a double-circle in all the figures. The numbers we used for
the transitions correspond to one of the two instances of the subnet. The numbers for the second
instance are T 21, T 22, T 23, T 24, T 25, T 27, respectively, instead of T 16, T 17, T 18, T 19, T 20, T 26.
6 The File System Model
The overall model for the distributed file system is shown in figure 2, where the contents of each
of the boxes labeled "Form Quorum" is the SPN shown in figure 1, with the exception of the CC,
CW, OC, and OW places, which are shared with the overall model. The left portion of figure 2
models the failure and repair of hosts where copies reside; the right portion models the failure and
repair of hosts where witnesses reside; the center portion models an update transaction. We will
describe each portion separately.
Places CW, DW, and OW (on the far right in the figure) represent the number of witnesses
that are current, down (host has failed), and out-of-date. Transition T2 represents the failure of
the host (we assume that all hosts are identical), transition T4 represents the repair of the host,
and transition T40 represents failure of the host of an out-of-date witness. Since we assume that a
witness goes out-of-date when the host fails, the output place for transition T4 is place OW.
A similar structure for the copies consists of transitions T 1, T3 and T 39, and places CC, DC,
and OC. We assume that a copy on a host that has failed and is subsequently repaired will check
for the existence of a quorum in the file system, possibly bringing itself up-to-date. This action is
represented by the box labeled "Form Quorum 2". When a host on which a copy resides is repaired,
its corresponding token is deposited in the OC place, but a token is also deposited in the TRY place
to represent the attempt to form a quorum. It is then possible that the out-of-date copy needs
to be brought up-to-date before reaching a quorum (transition T41 insures that no more than one
token exists in place TRY). When a token is deposited in place TRY, if it is possible to reach a
quorum, then transition T15 fires and a token is deposited in the START.2 place in the SPN of
figure 1. Also, a token is deposited in place INQ.2, which is simply used to signify that a quorum
is being formed. The QC, DONE, and QW places in the quorum gathering SPN are the same as
those shown in the overall SPN. The CC, CW, OC and OW places in the quorum gathering SPN
are the same as the respective places in the overall SPN. Thus, there are arcs from these places into
the boxes that are not shown explicitly in figure 2; these arcs are shown explicitly in figure 1. After
a quorum is gathered and the out-of-date copy is updated (transition T 13), the representatives are
returned to their respective places (transitions T12 and T 14). Transition T13 has higher priority
than transitions T12 or T 14, so that it fires before them.
The center section of the SPN models the normal update process. Transition T0 models the time
lapse between update requests. When a request is received, a token is deposited in place REQ. If a
quorum can be formed (function f evaluates to true), transition T5 will fire and deposit a token in
the START.1 place. The box labeled "Form Quorum 1" is another instance of the quorum gathering
SPN, with the QC.1, QW.1, and DONE.1 places in common with the overall SPN. Once the quorum
gathering is completed, the remaining current representatives become outdated (transitions T8 and
T 10). When these two transitions have moved all the tokens from CC and CW to OC and OW
respectively, T9 fires, allowing the update to occur. At this point, the representatives may be
updated (transitions T6 and T 7), or the hosts on which they reside may fail (transitions T28 and
T 29), after which we await the next request.
7 Manual Reconstruction of File Configuration
The SPN just defined produces a Markov chain with absorbing states, which correspond to situations
where a quorum cannot be formed because more than a majority of hosts are down. Since we
wanted to analyze the steady-state availability of the system, we introduce a manual intervention
to reconstruct the system when a quorum is not possible. Figure 3 shows the SPN used to model
the manual reconstruction of the file structure. When a request is received and it is not possible
to gather a quorum (function f evaluates to false), transition T30 (in figure 2) fires and deposits a
token in the START place of figure 3, which we now describe. The firings of transitions T31 and
T32 gather together the outdated representatives and deposit the corresponding tokens in places C
and W. The failed hosts must be repaired before their representatives can be gathered; the repair
is represented by timed transitions T36 and T 37. When all the representatives have been gathered,
transitions T33 and T34 may then begin firing, to bring representatives up-to-date. After these
two transitions have emptied places C and W, transition T38 can fire, putting a token in DONE.
Transition T35 (in figure 2) can then remove this token and the token residing in INREC, restoring
the initial configuration.
8 Model Specification
The model is specified to the solution package in CSPL (C-based Stochastic Petri net Language)
based on the C programming language. A set of predefined functions available for the definitions
of SPN entities distinguishes CSPL from C. Some of these predefined functions are place, trans,
iarc, oarc, and harc, used for defining places, transitions, input arcs, output arcs and inhibitor arcs,
respectively. There are also built-in functions for debugging the SPN and for specifying the solution
method and desired output measures. The enabling, distribution, and probability functions for each
transition are defined by the user as C functions; marking-dependency can be obtained using other
built-in functions such as mark("place"), returning the number of tokens in the specified place.
The enabling functions q and f , used respectively in the quorum gathering subnet (figure 1)
and to trigger the reconstruction subnet (in figure 2) are logical functions defined on the current
marking of the SPN. To keep these functions simple, we defined three places in the overall SPN,
INQ.1, INQ.2, and INREC. There is a token in place INQ.1 if a quorum is being formed for a
normal update; a token is in place INQ.2 if an out-of-date copy is checking to see if a valid quorum
is possible; place INREC holds a token during manual reconstruction of the file system.
The CSPL code describing q and f respectively is the following (in the C programming language
"&&" means "logical and" and ' '!" means "logical not"):
int q() {   /* body reconstructed from the description below; TOTAL denotes the (constant) total number of representatives */
  if (2 * (mark(QC) + mark(QW)) > TOTAL) return(1);
  else return(0);
}
Thus, q evaluates to true iff more than half of the total number of representatives is in the places
QW and QC (at least one of them is a copy, given the structure of our SPN).
int f() {   /* condition completed from the description below */
  if (mark(CC) && !mark(INQ.1) && !mark(INQ.2) && !mark(INREC)
      && 2 * (mark(DC) + mark(DW)) < TOTAL)   /* more representatives up than down */
    return(1);
  else return(0);
}
Thus, if there is a current copy (a token in place CC), and if no other quorum or reconstruction
is in progress (no token in any of places INQ.1, INQ.2 or INREC), and if more representatives are
up than are down, the transition is enabled.
The firing rates for some timed transitions depend on the number of tokens in the input place.
As an example, transition T1 fires at a rate equal to the failure rate of an individual host multiplied
by the number of operational hosts (tokens in place CC).
float rate_T1() { return(phi * mark(CC)); }   /* function name assumed; body reconstructed from the text above */
where "phi" is a floating point constant representing the failure rate of a single host. The marking-
dependent firing rates for the other transitions representing host failures are defined similarly.
9 Model Experimentation
For two different quorum definitions, we varied the mean time to repair failed hosts, the number
of copies, and the number of witnesses. For both sets of experiments, we assumed the parameters
listed in table 1 and considered the mean time to repair to be 1 hour, 2 hours, or 8 hours.
In the first set of experiments, we defined a quorum to be the majority of all representatives (a
quorum must include a current copy) as previously discussed. The second set of experiments relaxed
this definition by requiring a quorum to consist of a majority of representatives on operational hosts.
For example, suppose that a total of 3 copies and 2 witnesses were created, but only one of each
is now available and current. Given the requirement that a majority of all representatives must
participate in a quorum, an update cannot be serviced now. However, if we require that a quorum
consist of a majority of representatives on operational hosts, a quorum can be formed using the
available copy and witness. Under the second assumption, updates can continue until the last copy
fails, then a manual reconstruction must be performed. This idea was concurrently investigated by
Jajodia and Mutchler in [Jajodia87], where it was termed dynamic voting. A similar approach is
the adaptive voting used in the design of fault-tolerant hardware systems [Siewiorek82]. To model
adaptive voting instead of static voting, only the definitions of q and f need to be changed, to
eliminate the dependency on places DW and DC.
We analyzed the availability of the system by observing the probability that a token is in
place INREC. This measure represents the steady-state probability that the system is undergoing
manual reconstruction of the file system. The availability of the file system, the complement of this
probability, is shown under the two quorum definitions in table 2 for the repair times considered.
The model of the system with one copy had 11 states, while the model of the system with 4 copies
and 3 witnesses had 1746 states.
When the mean time to repair is short, we can see how the adaptive voting technique increases the availability
if the total number of representatives is small. This technique may decrease availability if
there are many representatives (see for example the configuration with 5 copies and 2 witnesses).
The availability of the system decreases as copies are replaced by witnesses for both voting techniques
up to a certain point, then it increases again. This behavior can be explained considering
the factors affecting the overall availability.
First of all, replacing copies with witnesses decreases the total number of copies, so the probability
of having all the copies down (regardless of the state of the witnesses) is higher, with a
negative effect on the availability. An especially sharp decrease in the availability is experienced
going from ⌈n/2⌉ + 1 to ⌈n/2⌉ copies, when n representatives are available (we present only the case n
odd; see [Jajodia87] for a discussion of the case n even); the reason is the quorum policy that we
are assuming. Most of the time no host is down, so, assuming n = 7, the quorum will contain
two copies and two witnesses in the (5,2) case, and one copy and three witnesses in the (4,3) case.
Having only one copy in the quorum implies that whenever that copy becomes unavailable, the
other copies will be useless, because they will be out-of-date. The increase of availability after the
point ⌈n/2⌉ is less intuitive; the main cause is probably the quorum policy itself (this suggests that
further investigations on the quorum policies and their effect on the availability are needed).
When the mean time to repair is long, the adaptive voting technique always increases availability although
the improvement decreases as the number of witnesses increases. With 5 representatives, allowing
one of the representatives to be a witness increases the availability under static voting, but decreases
it under adaptive voting. With 7 representatives, the assignment of two of them to witness status
maximizes availability under static voting. However, the adaptive algorithm still performs better.
Under the assumptions stated, the addition of witnesses to a fixed number of copies does not
increase the availability. This can be seen by comparing the results for 3 copies and 0 witnesses
(0.99908), 3 copies and 2 witnesses (0.95992), and 3 copies and 4 witnesses (0.95663) (see table 2,
adaptive voting). The reason for the decrease in availability when adding extra
witnesses can be explained by examining the procedure used for the selection of the participants
in a quorum. The desire to speed the quorum gathering process gives preference to witnesses in
the quorum, since the update procedure for a witness is fast. However, this preference increases
the probability of copies being out-of-date. Since a current copy is necessary for a quorum and
redundant copies are often out-of-date, a decrease in availability results.
In trying to improve the availability without considering performance, the quorum gathering
procedure would give preference to copies as participants. This can be easily reflected in the SPN
model under consideration. Only the priorities associated with the transitions in figure 1 need to be
changed, to include current copies, outdated copies, current witnesses and then outdated witnesses,
in that order. Table 3 compares the availability with 3 copies and 0, 2, or 4 witnesses with the two
different preferences in the formation of the quorum using the adaptive voting algorithm. In this
case, the addition of witnesses favorably affects the availability. The preference for copies in the
quorum increases the time needed to perform an update, since it takes longer to write a copy than
a witness. In table 4, we compare the steady-state probability of the system being in the "update"
state. We can estimate this probability from the SPN by looking at the probability that a token is
in place INQ.1 (figure 2): a token is more likely to be in place INQ.1 when preference in forming
a quorum is given to copies over witnesses.
Conclusions
We have presented a detailed stochastic Petri net model of a replicated file system for availability
analysis. The model included failure and repair of hosts, file updates, quorum formation, and
manual reconstruction of the file system. The total number of representatives of a file was varied
between one and seven; the composition of the representatives (number of copies and witnesses)
was varied as well.
Using this model, we investigated two different voting algorithms used to maintain file consis-
tency. The first algorithm, static voting, required a majority of all representatives to participate
in an update. The second algorithm, adaptive voting, required a majority of representatives on
operational units to participate in an update. In most cases, the adaptive voting algorithm resulted
in a higher availability of the file system. The effect was more pronounced when the mean time to
repair a host was higher, and thus a smaller number of hosts was available at a given time.
We then examined the process by which actual participants in an update are selected, given
that more than a majority are available. A desire for a fast update process leads to a preference
for witnesses in a quorum, with the effect of decreasing the availability of the file system. This
decrease in availability can be attributed to the fact that representatives not participating in the
quorum become outdated, and so copies are frequently out-of-date.
To increase the availability of the file system, preference is given to copies as participants in
the update, with an unfavorable effect on performance. Since it takes longer to perform an update
on a copy than on a witness, it takes longer to service each request if more copies are quorum
participants.
Future modeling efforts will look at the effect of converting copies to witnesses and vice versa
and at the effect of using different quorum policies on the availability of the file system. We will
also investigate heuristics for determining when to perform the conversion. We expect to be able
to explore a wide range of policies by making minor changes or additions to the SPN we have
presented.
--R
A class of Generalized Stochastic Petri Nets for the Performance Evaluation of Multiprocessor
The Reliability of Voting Mechanisms.
Extended Stochastic Petri Nets: Applications and Analysis.
The Design of a Unified Package for the Solution of Stochastic Petri Net Models.
Consistency and Recovery Control for Replicated Files.
The Roe File System.
Weighted Voting for Replicated Data.
Dynamic voting.
On the Integration of Delay and Throughput Measures in Distributed Processing Models.
Reseaux de Petri Stochastiques.
Voting with Witnesses: A Consistency Scheme for Replicated Files.
Voting with a Variable Number of Copies.
Detection of Mutual Inconsistency in Distributed Systems.
Petri Net Theory and the Modeling of Systems.
LOCUS: A Network Transparent
The Theory and Practice of Reliable System Design.
Concurrency Control and Consistency of Multiple Copies of Data in Distributed INGRES.
A Majority Consensus Approach for Concurrency Control for Multiple Copy databases.
--TR
A class of generalized stochastic Petri nets for the performance evaluation of multiprocessor systems
The reliability of voting mechanisms
Dynamic voting
A Majority consensus approach to concurrency control for multiple copy databases
Consistency and recovery control for replicated files
Petri Net Theory and the Modeling of Systems
Extended Stochastic Petri Nets
The Design of a Unified Package for the Solution of Stochastic Petri Net Models
Weighted voting for replicated data
LOCUS a network transparent, high reliability distributed system
--CTR
G. Chiola , M. A. Marsan , G. Balbo , G. Conte, Generalized Stochastic Petri Nets: A Definition at the Net Level and its Implications, IEEE Transactions on Software Engineering, v.19 n.2, p.89-107, February 1993
Yiannis E. Papelis , Thomas L. Casavant, Specification and analysis of parallel/distributed software and systems by Petri nets with transition enabling functions, IEEE Transactions on Software Engineering, v.18 n.3, p.252-261, March 1992
Yuan-Bao Shieh , Dipak Ghosal , Prasad R. Chintamaneni , Satish K. Tripathi, Modeling of Hierarchical Distributed Systems with Fault-Tolerance, IEEE Transactions on Software Engineering, v.16 n.4, p.444-457, April 1990
Changsik Park , John J. Metzner, Efficient Location of Discrepancies in Multiple Replicated Large Files, IEEE Transactions on Parallel and Distributed Systems, v.13 n.6, p.597-610, June 2002 | performance reliability tradeoffs;concurrency control;majority protocols;distributed databases;fault tolerant computing;file status;voting algorithm;witnesses;stochastic Petri net model;replicated file system;distributed environment;petri nets |
634719 | Recurrence equations and their classical orthogonal polynomial solutions. | The classical orthogonal polynomials are given as the polynomial solutions p_n(x) of the differential equation σ(x)y''(x) + τ(x)y'(x) + λ_n y(x) = 0, where σ(x) is a polynomial of at most second degree and τ(x) is a polynomial of first degree. In this paper a general method to express the coefficients A_n, B_n and C_n of the recurrence equation p_{n+1}(x) = (A_n x + B_n) p_n(x) - C_n p_{n-1}(x) in terms of the given polynomials σ(x) and τ(x) is used to present an algorithm to determine the classical orthogonal polynomial solutions of any given holonomic three-term recurrence equation, i.e., a homogeneous linear three-term recurrence equation with polynomial coefficients. In a similar way, classical discrete orthogonal polynomial solutions of holonomic three-term recurrence equations can be determined by considering their corresponding difference equation σ(x)Δ∇y(x) + τ(x)Δy(x) + λ_n y(x) = 0, where Δy(x) = y(x+1) - y(x) and ∇y(x) = y(x) - y(x-1) denote the forward and backward difference operators, respectively, and a similar approach applies to classical q-orthogonal polynomials, being solutions of the q-difference equation σ(x)D_q D_{1/q} y(x) + τ(x)D_q y(x) + λ_{q,n} y(x) = 0, where D_q y(x) = (y(qx) - y(x))/((q-1)x) denotes the q-difference operator. | Introduction
Families of orthogonal polynomials p n (x) (corresponding to a positive-definite measure) satisfy
a three-term recurrence equation of the form
p_{n+1}(x) = (A_n x + B_n) p_n(x) - C_n p_{n-1}(x)                               (1)
with C_n A_n A_{n-1} > 0, see e.g. [6, p. 20]. Moreover, Favard's theorem states that the converse
is also true.
On the other hand, in practice one is often interested in an explicit solution of a given
recurrence equation. Therefore it is an interesting question to ask whether a given recurrence
equation has classical orthogonal polynomial solutions.
In this paper an algorithm is developed which answers this question for a large class
of classical orthogonal polynomial systems. Furthermore, we present results of our corresponding
Maple implementation retode and compare these with the Maple implementation
rec2ortho of Koornwinder and Swarttouw [13]. These programs overlap, but rec2ortho
does not cover Bessel, Hahn and q-polynomials, whereas retode does not include the Meixner-
case.
Classical Orthogonal Polynomials
A family
p_n(x) = k_n x^n + k_n' x^{n-1} + · · ·      (k_n ≠ 0, n ∈ N_0)                  (2)
of polynomials of degree exactly n is a family of classical continuous orthogonal polynomials
if it is the solution of a differential equation of the type
σ(x) y_n''(x) + τ(x) y_n'(x) + λ_n y_n(x) = 0 ,                                  (3)
where σ(x) = a x^2 + b x + c is a polynomial of at most second order and
τ(x) = d x + e is a polynomial of first order ([4], [14]). Since one demands that p_n(x) has exact degree n,
by equating the coefficients of x^n in (3) one gets
λ_n = -n ( (n - 1) a + d ) .                                                     (4)
Similarly, a family p n (x) of polynomials of degree exactly n, given by (2), is a family of
classical discrete orthogonal polynomials if it is the solution of a difference equation of the
type
σ(x) Δ∇y_n(x) + τ(x) Δy_n(x) + λ_n y_n(x) = 0 ,                                  (5)
where
Δy(x) = y(x + 1) - y(x)   and   ∇y(x) = y(x) - y(x - 1)
denote the forward and backward difference operators, respectively, and σ(x)
and τ(x) are again polynomials of at most second and of first order, respectively,
see e.g. [19]. Again, (4) follows.
Finally, a family p n (x) of polynomials of degree exactly n, given by (2), is a family of
classical q-orthogonal polynomials if it is the solution of a q-difference equation of the type
σ(x) D_q D_{1/q} y_n(x) + τ(x) D_q y_n(x) + λ_{q,n} y_n(x) = 0 ,                 (6)
where
D_q y(x) = ( y(qx) - y(x) ) / ( (q - 1) x )      (q ≠ 1)
denotes the q-difference operator [7], and σ(x) and τ(x) are again
polynomials of at most second and of first order, respectively. By equating the coefficients
of x^n in (6) one gets (7), where the abbreviation
[n]_q = (q^n - 1) / (q - 1) = 1 + q + · · · + q^{n-1}
denotes the so-called q-brackets. Note that lim_{q→1} [n]_q = n.
It can be shown (see e.g. [15]) that any solution p n (x) of either (3), (5) or (6) satisfies a
recurrence equation (1).
The following is a general procedure to find the coefficients of the recurrence equation (as
well as of similar structural formulas for classical orthogonal polynomials, see [11]) in terms
of the coefficients a, b, c, d and e of σ(x) and τ(x):
1. Substitute p_n(x) = k_n x^n + k_n' x^{n-1} + k_n'' x^{n-2} + · · · in the differential equation (3), in
the difference equation (5) or in the q-difference equation (6), respectively.
2. Equating the coefficients of x^n yields λ_n, given by (4) and (7), respectively.
3. Equating the coefficients of x^{n-1} and x^{n-2} yields k_n' and k_n'', respectively, as rational
multiples of k_n.
4. Substitute p_n(x) in the proposed equation, and equate again the three highest coeffi-
cients. In the case of the recurrence equation (1), this yields A_n = k_{n+1}/k_n together with
expressions for B_n and C_n in terms of k_n, k_n', k_n'' and k_{n+1}, k_{n+1}', k_{n+1}''
by linear algebra.
5. Substituting the values of k_n' and k_n'' given in step (3) in these equations yields the
three unknowns A_n, B_n and C_n in terms of a, b, c, d, e, n and k_{n+1}/k_n.
With regard to the recurrence equation coefficients, we collect these results in
Theorem 1 Let p n family of polynomial solutions of the
differential equation (3). Then the recurrence equation (1) is valid with
and
in terms of the coefficients a; b; c; d and e of the given differential equation.
family of polynomial solutions of the difference
equation (5). Then the recurrence equation (1) is valid with
and
in terms of the coefficients a; b; c; d and e of the given difference equation.
family of polynomial solutions of the q-difference
equation (6). Then the recurrence equation (1) is valid with
a
a
and
in terms of the coefficients a; b; c; d and e of the given q-difference equation. 2
3 The Inverse Characterization Problem
It is well-known ([4], see also [5], [14]) that polynomial solutions of (3) can be classified
according to the zeros of σ(x), leading to the normal forms of Table 1 besides linear transformations
x ↦ Ax + B. The type of differential equation that we consider is invariant under
such a transformation.
family
1.
2. 1 \Gamma2x H n (x) Hermite polynomials
3. x \Gammax
5.
Table
1: Normal Forms of Polynomial Solutions
This shows that the only orthogonal polynomial solutions are linear transforms of the
Hermite, Laguerre, Bessel and Jacobi polynomials (for details see e.g. [11]), hence using a
mathematical dictionary one can always deduce the recurrence equation. Note, however,
that this approach except than being tedious may require the work with radicals, namely
the zeros of the quadratic polynomial oe(x), whereas our approach is completely rational:
Given k n+1 =k n 2 Q(n), the recurrence equation is given rationally by Theorem 1.
Moreover, Theorem 1 represents the recurrence equation by a unique formula. It is valid
also in the cases of Table 1:1 and 4a, with the trivial solution p In both cases we
have the recurrence equation p n+1
Now, we will use the fact that these equations are given explicitly to solve an inverse
problem.
Assume one knows that a polynomial system satisfies a differential equation (3). Then
by the classification of Table 1 it is easy to identify the system. On the other hand, given
an arbitrary holonomic three-term recurrence equation
it is less obvious to find out whether there is a polynomial
system
satisfying (15), being a linear transform of one of the classical systems (Hermite, Laguerre,
Jacobi, Bessel), and to identify the system in the affirmative case. In this section we present
an algorithm for this purpose. Note that Koornwinder and Swarttouw [13] have also considered
this question and in their Maple implementation rec2ortho propose a solution based
on the careful ad hoc analysis of the input polynomials q n ; r n ; and s n . Their Maple implementation
deals with the following families: Hermite, Charlier, Laguerre, Meixner-Pollaczek,
Meixner, Krawtchouk, and Jacobi.
Let us start with a recurrence equation of type (15). Without loss of generality we assume
that neither q has a nonnegative integer zero w.r.t. n. Otherwise, a suitable
shift can be applied, see Algorithm 1 and Example 1.
Therefore, in the sequel we assume that the recurrence equation
is valid, but neither q nonnegative integer
zeros. We search for solutions
Next, we divide (16) by q n (x), and replace n by n \Gamma 1. This brings (16) into the form
For being a linear transform of a classical orthogonal system, there is a recurrence
equation (1)
therefore (18) and (19) must agree. We would like to conclude that t n
This follows if we can show that p n (x)=p x). For a proof of this
assertion, see [10].
Therefore we can conclude that t n Hence if (18)
does not have this form, i.e., if either t n (x) is not linear in x or u n (x) is not a constant
with respect to x, we see that p n (x) cannot be a linear transform of a classical orthogonal
polynomial system. In the positive case, we can assume the form (19).
Since we propose solutions (17), equating the coefficients of x n+1 in (19) we get
(v
Hence the given A generates the term ratio k n+1 =k n . In particular k n turns
out to be a hypergeometric term, (i.e., k n+1 =k n is rational,) and is uniquely determined by
(20) up to a normalization constant k the zeros of w n are a subset of the
zeros of q is defined by (20) for all n 2 N from k 0 .
In the next step we can eliminate the dependency of k n by generating a recurrence
equation for the corresponding monic polynomials e
. For e p n (x), we get by
e
A n
e
e
e
with
e
A n
Then our formulas (9)-(10) read in terms of e
e
d)
and
e
\Gamman d)
(a
and these are independent of k n by construction.
Now we would like to deduce a; b; c; d and e from (21)-(22). Note that as soon as we have
found these five values, we can apply a linear transform (according to the zeros of oe(x)) to
bring the differential equation in one of the normal forms of Table 1 which finally gives us
the desired information.
We can assume that e
C n are in lowest terms. If the degree of either the numerator
or the denominator of e
B n is larger than 2, then by (21) p n (x) is not a classical system.
Similarly, if the degree of either the numerator or the denominator of e
C n is larger than 4, by
(22) the same conclusion follows.
Otherwise we can multiply (21) and (22) by their common denominators, and bring them
therefore in polynomial form. Both resulting equations must be polynomial identities in the
variable n, hence all of their coefficients must vanish. This gives a nonlinear system of
equations for the unknowns a; b; c; d and e. Any solution of this system with not both a
and d being zero yields a differential equation (3), and hence given such a solution one can
characterize it via Table 1. Therefore our question can be resolved in this case. In particular,
if one of the cases Table 1.1 or 1.4a applies, then there are no orthogonal polynomial solutions.
If the nonlinear system does not have such a solution, we deduce that no such values
a; b; c; d and e exist, hence no such differential equation is satisfied by p n (x), implying that
the system is not a linear transformation of a classical orthogonal polynomial system.
Hence the whole question boils down to decide whether the given nonlinear system has
nontrivial solutions, and to find these solutions in the affirmative case. As a matter of
fact, with Gr-obner bases methods, this question can be decided algorithmically ([16]-[18]).
Such an algorithm is implemented, e.g., in the computer algebra system REDUCE [17], and
Maple's solve command can also solve such a system.
Note that the solution of the nonlinear system is not necessarily unique. For example,
the Chebyshev polynomials of the first and second kind T n (x) and U n (x) satisfy the same
recurrence equation, but a different differential equation. We will consider this example in
more detail later.
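This non-uniqueness is easy to check by computer algebra; the following small sympy snippet (ours, not part of the paper or of retode) verifies that T_n(x) and U_n(x) obey the same three-term recurrence while satisfying different differential equations:

import sympy as sp

x = sp.Symbol('x')
# same recurrence p_{n+1}(x) = 2 x p_n(x) - p_{n-1}(x) for both kinds
for poly in (sp.chebyshevt, sp.chebyshevu):
    for n in range(1, 6):
        assert sp.expand(poly(n + 1, x) - 2*x*poly(n, x) + poly(n - 1, x)) == 0

# but different differential equations
n = 4
T, U = sp.chebyshevt(n, x), sp.chebyshevu(n, x)
assert sp.expand((1 - x**2)*T.diff(x, 2) - x*T.diff(x) + n**2*T) == 0
assert sp.expand((1 - x**2)*U.diff(x, 2) - 3*x*U.diff(x) + n*(n + 2)*U) == 0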
If we apply this algorithm to the recurrence equation p_{n+2}(x) = x p_{n+1}(x) of the powers x^n, it
generates the complete solution set, given by Table 1:1 and 1:4a.
The following statement summarizes the above considerations.
Algorithm 1 This algorithm decides whether a given holonomic three-term recurrence
equation has shifted, linear transforms of classical orthogonal polynomial solutions, and
returns their data if applicable.
1. Input: a holonomic three-term recurrence equation
2. Shift: Shift by
nonnegative integer zero
is a zero of q
3. Rewriting: Rewrite the recurrence equation in the form
If either t n (x) is not a polynomial of degree one in x or u n (x) is not constant with respect to
x, then return "no orthogonal polynomial solution exists"; exit.
4. Standardization: Given now A
define
(v
according to (20).
5. Make monic: Set
e
A n
and bring these rational functions in lowest terms. If the degree of either the numerator or the
denominator of e
B n is larger than 2, or if the degree of either the numerator or the denominator
of e
C n is larger than 4, return "no classical orthogonal polynomial solution exists";
exit.
6. Polynomial
e
d)
and
e
d)
(a
using the as yet unknowns a; b; c; d and e. Multiply these identities by their common denom-
inators, and bring them therefore in polynomial form.
7. Equating Coefficients: Equate the coefficients of the powers of n in the two resulting
equations. This results in a nonlinear system in the unknowns a; b; c; d and e. Solve this
system by Gr-obner bases methods. If the system has no solution or only one with
then return "no classical orthogonal polynomial solution exists"; exit.
8. Output: Return the classical orthogonal polynomial solutions of the differential equations
(3) given by the solution vectors (a; b; c; d; e) of the last step, according to the classification
of
Table
1, together with the information about the standardization given by (20). This
information includes the density
ae(x)
e
R -(x)
(see e.g. [14]), and the supporting interval through the zeros of oe(x). 1 2
Remark Assume that a given recurrence equation contains parameters. Then our implementation
determines for which values of the parameters there are orthogonal polynomial
solutions, by solving not only for a; b; c; d and e, but moreover for those parameters.
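To illustrate the flavour of steps 6-7 (the sketch below is ours and uses a made-up polynomial identity, not the actual formulas (21)-(22)): once the identities are cleared of denominators, one equates the coefficients of the powers of n and hands the resulting nonlinear system in a, b, c, d, e to a solver, e.g. in sympy:

import sympy as sp

n, a, b, c, d, e = sp.symbols('n a b c d e')

# placeholder for one of the polynomial identities in n (the real ones come from (21)-(22))
identity = sp.Poly((a + 2*d)*n**2 + (b - e)*n + (c - 1), n)

system = identity.all_coeffs()                     # every coefficient of a power of n must vanish
solutions = sp.solve(system, [a, b, c, d, e], dict=True)
print(solutions)                                   # e.g. [{a: -2*d, b: e, c: 1}]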
Example 1 As a first example, we consider the recurrence equation
1 If the zeros of oe(x) are not real, then these orthogonal polynomials are not positive-definite. The Bessel
system is never positive-definite [2].
we see that the shift p n (x) := P n+1 (x) is necessary, i.e. (23). For
we have the recurrence equation
In the first steps this recurrence equation is brought into the form
hence
and therefore
Moreover, for monic e
e
hence e
1. The polynomial identities concerning e
C n of step 5 of the
algorithm yield
At this point we have already determined
Hence possible classical orthogonal polynomial solutions of (24) are defined in the interval
In the first of the above cases, i.e. for and the differential equation
corresponding to the density
e
R -(x)
The corresponding orthogonal polynomials are multiples of translated Chebyshev polynomials
of the first kind
(see e.g. [1],
Table
22.2, and (22.5.11); C n (x) are monic, but C
hence finally
In the second of the above cases, i.e. for one gets the equation
a
with two possible solutions that give the differential equations
and
They correspond to the densities
r
and
r
respectively, hence the orthogonal polynomials are multiples of the Jacobi polynomials
Finally, in the third of the above cases, i.e. for
corresponding to the density
e
R -(x)
The corresponding orthogonal polynomials are multiples of translated Chebyshev polynomials
of the second kind
(see e.g. [1],
Table
22.2, and (22.5.13); S n (x) are monic, see also Table 22.8), hence
U
We see that the recurrence equation (24) has four different (shifted) linearly transformed
classical orthogonal polynomial solutions!
Using our implementation, these results are obtained by
strict:=true:
Warning: several solutions found
[Maple output: the four differential equations found, each with its orthogonality interval and density]
which gives the corresponding differential equations, the intervals and densities, as well as
the term ratio k_{n+1}/k_n.
With Koornwinder-Swarttouw's rec2ortho, these results are obtained by the statements
rec2ortho((n+2)/(n+1),0,n/(n+1)), rec2ortho((n+2)/(n+1),0,n/(n+1),4,0),
rec2ortho((n+2)/(n+1),0,n/(n+1),2,-1), and rec2ortho((n+2)/(n+1),0,n/(n+1),2,1),
respectively. Note that here the user must know the initial values to determine possible orthogonal
polynomial solutions, whereas our approach finds all possible solutions at once.
Example 2 As a second example, we consider the recurrence equation
depending on the parameter α ∈ R. Here obviously the question arises whether or not
there are any instances of this parameter for which there are classical orthogonal polynomial
solutions. In step 6 of Algorithm 1 we therefore solve also for this unknown parameter. This
gives a slightly more complicated nonlinear system, with the unique solution
ae
oe
Hence the only possible value for α with classical orthogonal polynomial solutions is
in which case one gets the differential equation
with density
in the interval [\Gamma1=2; 1], corresponding to linearly transformed Laguerre polynomials.
Using our implementation, these results are obtained by
strict:=false:
Warning : parameters have the values;
[2
@
@x
With Koornwinder-Swarttouw's rec2ortho, this result can also be obtained. On the other
hand, the Bessel polynomials are not accessible with Koornwinder-Swarttouw's rec2ortho.
4 Classical Discrete Orthogonal Polynomials
In this section, we give similar results for classical orthogonal polynomials of a discrete
variable (see Chapter 2 of [19]). The classical discrete orthogonal polynomials are given by
a difference equation (5).
family
1. 1 ff x
translated Charlier
2a. x 0 x n falling factorial
3. x - (fl
4. x p
pols.
5.
pols.
Table
2: Normal Forms of Discrete Polynomials
These polynomials can be classified similarly as in the continuous case according to
the functions oe(x) and -(x); up to linear transformations the classical discrete orthogonal
polynomials are classified according to Table 2 (compare [19], Chapter 2). In particular, case
(2a) corresponds to the non-orthogonal solution x n in Table 1. Similarly as for the powers
d
dx
the falling factorials x n :=
It turns out that they are connected with the Charlier polynomials by the limiting process
lim
\Gamman
where we used the hypergeometric representation given in [19, (2.7.9)].
Note, however, that other than in the differential equation case the above type of difference
equation is not invariant under general linear transformations, but only under integer
shifts. We will have to take this under consideration.
The classical discrete orthogonal polynomials satisfy a recurrence equation (1)
with A given by Theorem 1.
Similarly as in the continuous case, this information can be used to generate an algorithm
to test whether or not a given holonomic recurrence equation has classical discrete orthogonal
polynomial solutions. Obviously the first three steps of this algorithm agree with those given
in Algorithm 1.
Algorithm 2 This algorithm decides whether a given holonomic three-term recurrence
equation has classical discrete orthogonal polynomial solutions, and returns their data if
applicable.
1. Input: a holonomic three-term recurrence equation
2. Shift: Shift by maxfn 2 N 0 jn is zero of either q necessary.
3. Rewriting: Rewrite the recurrence equation in the form
If either t n (x) is not a polynomial of degree one in x or u n (x) is not constant with respect to
return "no orthogonal polynomial solution exists"; exit.
4. Linear Transformation: Rewrite the recurrence equation by the linear transformation
x 7! x\Gammag
f
with (as yet) unknowns f and g.
5. Standardization: Given now A
define
(v
according to (8).
6. Make monic: Set
e
A n
and bring these rational functions in lowest terms. If the degree of either the numerator or
the denominator of e
B n is larger than 2, if the degree of the numerator of e
C n is larger than
6, or if the degree of the denominator of e
C n is larger than 4, then return "no classical
discrete orthogonal polynomial solution exists"; exit.
7. Polynomial Identities: Set
e
according to (11), and
e
according to (12), in terms of the unknowns a; b; c; d; e; f and g. Multiply these identities by
their common denominators, and bring them therefore in polynomial form.
8. Equating Coefficients: Equate the coefficients of the powers of n in the two resulting
equations. This results in a nonlinear system in the unknowns a; b; c; d; e; f and g. Solve this
system by Gr-obner bases methods. If the system has no solution, then return "no classical
discrete orthogonal polynomial solution exists"; exit.
9. Output: Return the classical orthogonal polynomial solutions of the difference equations (5)
given by the solution vectors (a; b; c; d; e; f; g) of the last step, according to the classification
given in Table 2, together with the information about the standardization given by (8). This
information includes the necessary linear transformation g, as well as the discrete
weight function ae(x) given by
ae(x)
(see e.g. [19]).
Proof: The proof is an obvious modification of Algorithm 1. The only difference is that
we have to take a possible linear transformation fx into consideration since the difference
equation (5) is not invariant under those transformations. This leads to step 4 of the
algorithm. 2
Note that an application of Algorithm 2 to the recurrence equation p n+2
which is valid for the falling factorial p n generates the difference
equation x\Deltarp n of Table 2:2a.
Example 3 We consider again the recurrence equation (31)
depending on the parameter α ∈ R. This time, we are interested in classical discrete orthogonal
polynomial solutions.
According to step 4 of Algorithm 2, we rewrite (31) using the linear transformation
x 7! x\Gammag
f
with as yet unknowns f and g. Step 5 yields the standardization
=f
In step 8, we solve the resulting nonlinear system for the variables fa; b; c; d; e; f; resulting
in
ae
d
d
d
oe
This is a rational representation of the solution. However, since we assume α to be arbitrary,
we solve the last equation for b. This yields
which cannot be represented without radicals. Substituting this into (32) yields the solution
ae
d
oe
d and e being arbitrary. It turns out that for α < 1/4 this corresponds to Meixner or
Krawtchouk polynomials.
With Koornwinder-Swarttouw's rec2ortho, this result can also be obtained. Moreover,
rec2ortho determines that for α > 1/4 one gets Meixner-Pollaczek polynomials. These
polynomials are not accessible by our approach.
Example 4 Here we want to discuss the possibility that a given recurrence equation might
have several classical discrete orthogonal solutions. Whereas the recurrence equation of the
Hahn polynomials h (ff;fi)
(x; N) has (besides several linear transformations) only this single
classical discrete orthogonal solution, the case results in two essentially different
solutions.
Here one has the recurrence equation
An application of Algorithm 2 shows that this recurrence equation corresponds to the two
different difference equations
and
Using our implementation, these results are obtained by
strict:=true:
Warning : parameters have the values;
ag
Warning : parameters have the values;
ag
Warning: several solutions found
Note that Hyperterm(upper,lower,z,x) denotes the hypergeometric term (= summand)
of the hypergeometric function hypergeom(upper,lower,z) with summation variable x, see
[9].
Hahn polynomials are not accessible with Koornwinder-Swarttouw's rec2ortho.
5 Classical q-Orthogonal Polynomials
In this section, we consider the same problem for classical q-orthogonal polynomials ([7],
[12], see e.g. [8]). The classical q-orthogonal polynomials are given by a q-difference equation
(6).
These polynomials can be classified similarly as in the continuous and discrete cases according
to the functions oe(x) and -(x); up to linear transformations the classical q-orthogonal
polynomials are classified according to Table 3.
family
1.
2.
3. pols.
4. 1 a+1\Gammax
(a)
pols.
5. x xq+a+q
7. x
pols.
8.
9.
pols.
alternative q-Charlier pols.
11.
pols.
12.
pols.
13.
U (a)
pols.
14.
15. (q
Table
3: Normal Forms of q-Polynomials
For the sake of completeness we have included all families from [8], Chapter 3, although
they overlap in several instances. The non-orthogonal polynomial solutions are the powers
x n and the q-Pochhammer functions
The classical q-orthogonal polynomials satisfy a recurrence equation (1)
with A given by Theorem 1.
Similarly as in the continuous and discrete cases, this information can be used to generate
an algorithm to test whether or not a given holonomic recurrence equation has classical q-
orthogonal polynomial solutions.
Algorithm 3 This algorithm decides whether a given holonomic three-term recurrence
equation has classical q-orthogonal polynomial solutions, and returns their data if applicable
1. Input: a holonomic three-term recurrence equation
2. Shift: Shift by maxfn 2 N 0 jn is zero of either q necessary.
3. Rewriting: Rewrite the recurrence equation in the form
If either t n (x) is not a polynomial of degree one in x or u n (x) is not constant with respect to
return "no q-orthogonal polynomial solution exists"; exit.
4. Linear Transformation: Rewrite the recurrence equation by the linear transformation
x 7! x\Gammag
f
with (as yet) unknowns f and g.
5. Standardization: Given now A
define
(v
6. Make monic: Set
e
A n
and bring these rational functions in lowest terms. If the degree (w.r.t N := q n ) of the
numerator of e
B n is larger than 3, the degree of the denominator of e
B n is larger than 4, the
degree of the numerator of e
C n is larger than 7, or the degree of the denominator of e
C n is
larger than 8, then return "no classical q-orthogonal polynomial solution exists";
exit.
7. Polynomial Identities: Set
e
according to (13), and
e
according to (14), in terms of the unknowns a; b; c; d; e; f and g. Multiply these identities by
their common denominators, and bring them therefore in polynomial form.
8. Equating Coefficients: Equate the coefficients of the powers of in the two resulting
equations. This results in a nonlinear system in the unknowns a; b; c; d; e; f and g. Solve this
system by Gr-obner bases methods. If the system has no solution, then return "no classical
q-orthogonal polynomial solution exists"; exit.
9. Output: Return the q-classical orthogonal polynomial solutions of the q-difference equations
given by the solution vectors (a; b; c; d; e; f; g) of the last step, according to the classification
given in Table 3, together with the information about the standardization given by
(8). This information includes the necessary linear transformation as well as the
q-discrete weight function ae(x) given by
ae(qx)
ae(x)
Proof: The proof is an obvious modification of Algorithms 1 and 2. 2
Example 5 We consider the recurrence equation
depending on the parameter α ∈ R. This time, we are interested in classical q-orthogonal
polynomial solutions.
According to step 4 of Algorithm 3, we rewrite (31) using the linear transformation
x 7! x\Gammag
f
with as yet unknowns f and g. Step 5 yields the standardization
=f
In step 8, we solve the resulting nonlinear system for the variables fa; b; c; d; e; f; resulting
in the following nontrivial solution
that corresponds-for 1-to the q-difference equation
x
Hence for every α ∈ R and every scale factor f there is a q-classical solution that corresponds
to q-Hermite I polynomials, see Table 3, which have real support for α < 0.
Using our implementation, these results are obtained by
Warning : parameters have the values;
ae(x)
Note that q-polynomials are not accessible with Koornwinder-Swarttouw's rec2ortho.
Note
The Maple implementation retode, and a worksheet retode.mws with the examples of this
article can be obtained from http://www.imn.htwk-leipzig.de/~koepf/research.html.
Acknowledgment
The first named author thanks Tom Koornwinder and Ren'e Swarttouw for helpful discussions
on their implementation rec2ortho [13]. Examples 2 and 4 given by recurrence equation
were provided by them. Thanks to the support of their institutions I had a very pleasant
and interesting visit at the Amsterdam universities in August 1996.
--R
Handbook of Mathematical Functions.
The Bessel polynomials.
A set of orthogonal polynomials that generalize the Racah coefficients or 6-j symbols
On polynomial solutions of a class of linear differential equations of the second order.
An Introduction to Orthogonal Polynomials.
The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue
Hypergeometric Summation.
Algorithms for classical orthogonal polynomials.
Representations of orthogonal polynomials.
Compact quantum groups and q-special functions
rec2ortho: an algorithm for identifying orthogonal polynomials given by their three-term recurrence relation as special functions
Die Charakterisierung der klassischen orthogonalen Polynome durch Sturm- Liouvillesche Differentialgleichungen
Solving polynomial equation systems by Groebner type methods.
Algebraic solution of nonlinear equation systems in REDUCE.
On decomposing systems of polynomial equations with finitely many solutions.
Classical orthogonal polynomials of a discrete variable.
--TR
Representations of orthogonal polynomials | q-difference equation;structure formula;computer algebra;maple;differential equation |
634745 | Monotonicity and Collective Quantification. | This article studies the monotonicity behavior of plural determiners that quantify over collections. Following previous work, we describe the collective interpretation of determiners such as all, some and most using generalized quantifiers of a higher type that are obtained systematically by applying a type shifting operator to the standard meanings of determiners in Generalized Quantifier Theory. Two processes of counting and existential quantification that appear with plural quantifiers are unified into a single determiner fitting operator, which, unlike previous proposals, both captures existential quantification with plural determiners and respects their monotonicity properties. However, some previously unnoticed facts indicate that monotonicity of plural determiners is not always preserved when they apply to collective predicates. We show that the proposed operator describes this behavior correctly, and characterize the monotonicity of the collective determiners it derives. It is proved that determiner fitting always preserves monotonicity properties of determiners in their second argument, but monotonicity in the first argument of a determiner is preserved if and only if it is monotonic in the same direction in the second argument. We argue that this asymmetry follows from the conservativity of generalized quantifiers in natural language. | Introduction
Traditional logical studies of quantification in natural language concentrated on the
interactions between quantifiers and distributive predicates - predicates that describe
properties of atomic entities. Generalized Quantifier Theory (GQT), as was
applied to natural language semantics in the influential works of Barwise and Cooper (1981),
Benthem (1984) and Keenan and Stavi (1986), followed this tradition and concentrated
on 'atomic' quantification. The framework that emerges from these
works provides a general treatment of sentences such as the following.
(1) All the students are happy. Some girls arrived. No pilot is hungry. Most
teachers are Republican. Exactly five boys smiled. Not all the children
sneezed.
In these sentences, the denotations of both the nominal (e.g. students, girls, etc.)
and the verb phrase (e.g. be happy, arrived, etc.) are traditionally treated as distributive
predicates, which correspond to subsets of a domain of (arbitrary) atomic
entities. Standard GQT assigns determiners such as all, some and most denotations
that are relations between such sets of atomic entities.
While this general treatment is well-motivated, it does not account for the interactions
between quantifiers and collective predicates. Consider for instance the
following sentences.
(2) All the colleagues cooperated. Some girls sang together. No pilots dispersed.
Most of the sisters saw each other. Exactly five friends met at the restaurant.
Not all the children gathered.
According to most theories of plurals, nominals such as colleagues, sisters and
friends and verb phrases such as cooperated, gathered and saw each other do not
denote sets of atomic entities, but rather sets of collections of such entities. There
are various theories about the algebraic structure of such collections, but for our
purposes in this article it is sufficient to assume that collections are sets of atomic
entities. Thus, we assume that collective predicates denote sets of sets of atomic
entities. Consequently, the standard denotation of determiners in GQT as relations
between sets of atoms is not directly applicable to sentences with collective predicates
Early contributions to the study of collective quantification in natural language,
most notably Scha (1981), propose that meanings of 'collective statements' as in
(2) are derived using 'collective' denotations of determiners. 1 More recent works,
including among others Van der Does (1992,1993), Dalrymple et al. (1998) and
Winter (1998, 2001), propose to derive such collective meanings of determiners
from their standard distributive denotations in GQT using general mappings that
apply to these distributive meanings. In the works of Van der Does and Winter, type
shifting operators apply to a standard determiner denotation D, which ranges over
atomic entities, and derives a determiner of a higher type O(D), which ranges over
sets of atomic entities. 2 The study of collective quantification as in (2) is reduced in
these theories to the study of the available O mapping(s) from standard determiners
to determiners over collections. We follow Winter (2001) and adopt one general
type shifting principle for collective quantification that unifies Scha's and Van der
Does' `neutral' and 'existential' liftings of determiners into one operator. This
operator is referred to as determiner fitting (dfit).
1 For earlier works on plural quantification within Montague Grammar see Bennett (1974) and
Hauser (1974).
As we shall see below, the bounded composition operator that Dalrymple et al. propose can also
be cast to a type shifting operator on determiners.
The type shifting approach establishes a connection between standard GQT
and linguistic theories of plurality. A natural question that arises in this context is:
what are the relations between semantic properties of standard quantifiers in GQT
and properties of their 'collectivized' version? In this article we concentrate on
the monotonicity properties of determiners, which, as far as standard distributive
quantification is concerned, are one of the best studied aspects of quantification in
natural language. 3 Consider for instance the simple valid entailments (denoted by
')') in (3), with the determiner all.
(3) a. All the students are happy ⇒ All the rich students are happy.
b. All the students are very happy ⇒ All the students are happy.
Intuitively, the entailments in (3a-b) show that, in simple sentences, the determiner
all licenses a replacement of its first argument (students) by any subset of this
argument (e.g. rich students), and licenses a replacement of its second argument
(very happy) by any superset of this argument (e.g. happy). Thus, the determiner
all is classified as downward monotone in its first argument but upward monotone
in its second argument.
The starting point for the investigations in this work is the observation that such
monotonicity entailments are not always preserved when the determiner quantifies
over collections. Consider for instance the contrast between the sound entailment
in (3a) and the invalid entailment in (4) below.
(4) All the students drank a whole glass of beer together ⇏ All the rich students
drank a whole glass of beer together.
In a situation where the students are s 1
and s 3
and the rich students are s 1
and
assume that the group fs drank a whole glass of beer together, but
no other group did. In this situation, the antecedent in (4) is obviously true, but
the consequent is false. However, as we shall see, many other plural determiners
do not lose their monotonicity properties when they apply to collective predicates.
This variation calls for a systematic account of the monotonicity properties of determiners
in their collective usage, in relation to their monotonicity properties in
standard GQT.
The aim of this work is to study these relations in detail. We will prove that, under
the adopted determiner fitting operator, 'monotonicity loss' with all is strongly
3 See Ladusaw (1979), Fauconnier (1978) and much recent work on the linguistic centrality of
monotonicity for describing the distribution of negative polarity items like any or ever. For instance,
in correlation with the monotonicity properties of all as reflected in (3), the negative polarity item
ever can appear in the nominal argument of all (e.g. in (i) below), in which it is downward monotone,
but not in the verb phrase argument (e.g. in (ii)), where all is upward monotone:
(i) All the [students who have ever visited Haifa] [came to the meeting].
(ii) *All the [students who came to the meeting] [have ever visited Haifa].
connected to the fact that the monotonicity properties of this determiner are different
in its two arguments. We show that determiner fitting preserves the monotonicity
properties of determiners in their second argument, and it further preserves
monotonicity properties of determiners which have the same monotonicity properties
in both arguments. However, with determiners such as all, not all, some but not
all, and either all or none (of the), which are monotone in their first argument, but
have a different monotonicity property in the second argument, monotonicity in the
first argument is not preserved under determiner fitting. We claim that the origin of
these (empirically welcome) results is in the 'neutral' process that Scha proposed
for collective quantification, and that the combination of this treatment with an 'ex-
istential' lifting, which is empirically well-motivated, has no effects whatsoever on
the (non-)preservation of (non-)monotonicity with collective quantifiers.
The structure of the rest of this article is as follows. Section 2 reviews some
familiar notions from GQT that are used in subsequent sections. Section 3 describes
previous treatments of collective quantification and the uniform type shifting
strategy that is adopted in this paper. Section 4 establishes the facts pertaining
to (non-)preservation of (non-)monotonicity under type shifting with all possible
monotonicity properties of determiners in GQT.
2 Notions from generalized quantifier theory
This section reviews some familiar notions from standard GQT that are important
for the developments in subsequent sections. For an exhaustive survey of standard
GQT see Keenan and Westerståhl (1996).
The main property of quantifiers that is studied in this article is monotonicity,
which is a general concept that describes 'order preserving' properties of functions
over partially ordered domains.
Definition 1 (monotonicity) Let A_1, ..., A_n and B be partially
ordered sets, and let f be a function from A_1 × ... × A_n to B. The function f is called
upward (downward) monotone in its i-th argument iff for all a_1 ∈ A_1, ..., a_n ∈ A_n and
a'_i ∈ A_i with a_i ≤ a'_i: f(a_1, ..., a_i, ..., a_n) ≤ f(a_1, ..., a'_i, ..., a_n)
(f(a_1, ..., a'_i, ..., a_n) ≤ f(a_1, ..., a_i, ..., a_n), respectively).
We say that f is monotone in its i-th argument iff f is either upward or downward
monotone in its i-th argument.
Extensional denotations are given relative to an arbitrary non-empty finite set
E, to which we refer as the domain of atomic entities, or simply atoms. Given a
non-empty domain E, a determiner over E is a function from ℘(E) × ℘(E) to
{0, 1}. 4 Hence, a determiner is a relation between subsets of E. The set ℘(E),
the power set of E, is ordered by set inclusion. The set {0, 1}, the domain of
truth values, is ordered by implication, which is simply the numerical '≤' order on
4 Later in the paper, we refer to such determiners as Atom-Atom determiners, since both their
arguments are sets of atomic entities.
Determiner       Denotation (for all A, B ⊆ E)                    Monotonicity
all              all'(A)(B) = 1 iff A ⊆ B                         ↓MON↑
not all          (¬all')(A)(B) = 1 iff A ⊄ B                      ↑MON↓
some             some'(A)(B) = 1 iff A ∩ B ≠ ∅                    ↑MON↑
no               no'(A)(B) = 1 iff A ∩ B = ∅                      ↓MON↓
most             most'(A)(B) = 1 iff |A ∩ B| > |A \ B|            −MON↑
exactly five     exactly 5'(A)(B) = 1 iff |A ∩ B| = 5             −MON−
Table 1: standard denotations of some determiners
{0, 1}. Since a determiner is a two-place function, we use the terms left monotonicity
and right monotonicity for referring to its monotonicity in the first and second
arguments respectively. We use the following notation:
• ↑MON, ↓MON and −MON for determiners that are upward left-monotone,
downward left-monotone and not left-monotone, respectively.
• MON↑, MON↓ and MON− for determiners that are upward right-monotone,
downward right-monotone and not right-monotone, respectively.
We combine these two notations, and say for instance that the determiner all is
↓MON↑ according to its definition as the subset relation. The standard denotations
that are assumed for some determiners are given in table 1, together with their
monotonicity properties.
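To make the table concrete, here is a small computational sketch (ours, not part of the original article): each standard determiner is a Python function that maps two finite sets, modelled as frozensets of atoms, to a truth value. We reuse these toy definitions in later sketches.

    # Toy Atom-Atom determiners over a finite domain (arguments are frozensets of atoms).
    def all_(A, B):    return A <= B                  # A is a subset of B
    def not_all(A, B): return not (A <= B)
    def some(A, B):    return len(A & B) > 0
    def no(A, B):      return len(A & B) == 0
    def most(A, B):    return len(A & B) > len(A - B)
    def exactly(n):    return lambda A, B: len(A & B) == n   # e.g. exactly(5) for 'exactly five'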
The well-known conservativity property of determiners reflects the observation
that the truth value that determiners in natural language assign to any pair of sets A
and B is identical to the truth value they assign to A and A " B. This is observed
in (seemingly obvious) equivalences such as the following.
(5) All the/some/no/most of the/exactly five cars are blue ⇔
All the/some/no/most of the/exactly five cars are blue cars.
Formally, conservativity of determiner functions is defined as follows.
Definition 2 (conservativity) A determiner D over E is conservative (CONS) iff
for all A, B ⊆ E: D(A)(B) = D(A)(A ∩ B).
Thus, in order to evaluate the truth value of a sentence D students are hungry, with
a conservative determiner D, we do not have to know the set of all hungry entities,
but only the set of hungry students.
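As an illustration (again a sketch of ours), conservativity can be checked by brute force over a small finite domain; the subsets helper defined here is reused in later sketches.

    from itertools import chain, combinations

    def subsets(E):
        # all subsets of a finite set E, as frozensets
        E = list(E)
        return [frozenset(c) for c in chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

    def is_conservative(D, E):
        return all(D(A, B) == D(A, A & B) for A in subsets(E) for B in subsets(E))

For instance, is_conservative(all_, {'a', 'b', 'c'}) returns True for the toy all_ defined above.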
Let us mention a useful constancy aspect of the meaning of most natural language
determiners. 5 Van Benthem (1984) defines the permutation invariant (PI)
determiners as follows.
5 The exceptions to PI are determiners such as my, her or this, which we henceforth ignore.
Definition 3 (permutation invariance) A determiner D over E is permutation invariant
(PI) iff for all permutations π of E and for all A, B ⊆ E: D(A)(B) = D(π(A))(π(B)).
Roughly speaking, when a determiner D is PI, this means that it is not sensitive to
the identity of the members in its arguments, but only to set-theoretical relations
between its arguments. It is easy to verify that the determiners in table 1 are all PI.
The following two definitions characterize two trivial classes of determiners.
Definition 4 (left/right triviality) A determiner D over E is called left (right)
trivial iff for all A, B, C ⊆ E:
D(A)(B) = D(C)(B) (D(A)(B) = D(A)(C), respectively).
Intuitively, an LTRIV (RTRIV) determiner is insensitive to the identity of its left
(right) argument. For instance, the determiners less than zero and at least zero,
which are both LTRIV and RTRIV, assign the same truth value (0 and 1 respec-
tively) to all possible arguments. We occasionally restrict our attention to determiners
that are not right trivial. This is because non-right-triviality is a stronger
restriction on conservative determiners than non-left-triviality - provably, all conservative
LTRIV determiners are RTRIV. However, a conservative RTRIV determiner
is not necessarily LTRIV. For instance, the determiner D s.t. D(A)(B) = 1
iff A ≠ ∅ is conservative and RTRIV but is not LTRIV.
In this article we study the monotonicity properties of non-right-trivial conservative
determiners that satisfy permutation invariance, which are also in the main
focus of the general theory of quantification in natural language.
3 Collective determiners and type shifting principles
The type shifting account of plural determiners that was initiated by Scha (1981) is
motivated by sentences such as those in (2), which involve collective predicates. In
this section we review previous treatments of collective quantification, and concentrate
on the proposal of Winter (1998,2001) that aims to solve some of the empirical
problems for Scha's and Van der Does' proposals. As an example that illustrates
many aspects of the interpretation of collective determiners, consider the following
sentence.
(6) Exactly five students drank a whole glass of beer together.
The denotation of the collective predicate drank a whole glass of beer together
is assumed to be an element of ℘(℘(E)) - a set of sets of atomic elements. To
interpret sentence (6), the meaning of the determiner exactly five from table 1 is
shifted so that it can combine with this collective predicate. This section deals with
the proper way(s) to define such a shifting operator.
3.1 The type shifting operators of Scha and Van der Does
Scha (1981) proposes an extension of standard GQT to the treatment of collectivity
phenomena as in sentence (6). The work in Van der Does (1992,1993) contains a
systematic reformulation of Scha's approach using type shifting operators within
contemporary GQT. In the systems of Scha and Van der Does (henceforth S&D),
both distributive and collective verbal predicates (e.g. smile, meet) denote elements
of ℘(℘(E)). However, nominal predicates such as students standardly denote subsets
of E. Accordingly, in S&D's proposal, collective determiners are functions
from ℘(E) × ℘(℘(E)) to {0, 1}. We distinguish such Atom-Set determiners, where
the first argument is a set of atoms and the second argument is a set of sets of
atoms, from the standard Atom-Atom determiners of GQT as in table 1, where both
arguments are sets of atoms. Van der Does follows Van Benthem (1991:68) and
proposes that the Atom-Set determiners that are necessary for the interpretation
of plural sentences can be obtained systematically from Atom-Atom determiners.
There are two collective shifts that are proposed for sentences like (6) in S&D's
works. 6 One collective operator is called E (the Existential operator). For sentence
(6), this operator generates a statement that claims that there is a set of exactly five
students and that this set drank a whole glass of beer together. In general, for any
Atom-Atom determiner D over a domain E, applying the E operator leads to the
Atom-Set determiner E(D) which is defined for all A ⊆ E and B ⊆ ℘(E) by:
(E(D))(A)(B) = 1 iff there is X ⊆ A such that D(A)(X) = 1 and X ∈ B.
Scha proposes another collective analysis of plural determiners, which Van
der Does refers to as neutral. In sentence (6), for instance, this neutral analysis
counts all the individual students who participated in sets of students that drank a
whole glass of beer together, and requires that the total number of these students is
exactly five. For any Atom-Atom determiner D over E, the corresponding Atom-Set
determiner N(D) (Van der Does' N_2) is defined for all A ⊆ E and B ⊆ ℘(E) by:
(N(D))(A)(B) = D(A)(∪(B ∩ ℘(A))).
Note that the set ∪(B ∩ ℘(A)) contains x if and only if x is an element of a subset
of A that belongs to B.
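Continuing the toy implementation from section 2, the two liftings can be sketched as follows (the E and N formulas are our reconstruction of the definitions above, so the details should be read with that caveat):

    def E_lift(D):
        # (E(D))(A)(B): some subset X of A with D(A)(X) = 1 is a member of B
        return lambda A, B: any(X <= A and D(A, X) for X in B)

    def N_lift(D):
        # (N(D))(A)(B) = D(A)(union of (B ∩ ℘(A)))
        def lifted(A, B):
            participants = frozenset().union(*[X for X in B if X <= A])
            return D(A, participants)
        return lifted

    # Example (7)/(8): six students, five of whom sang (sang is distributed by subsets(...)).
    student = frozenset({'s1', 's2', 's3', 's4', 's5', 's6'})
    sing    = frozenset({'s1', 's2', 's3', 's4', 's5'})
    print(N_lift(exactly(5))(student, frozenset(subsets(sing))))   # True, agreeing with the standard analysis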
The N operator involves a mapping of the left argument A of the determiner,
which is a set of atoms, into the power set of A, which is a set of sets of atoms.
Such a mapping from sets to sets of sets is useful in S&D's strategy, as in most
other theories of plurals, since it makes a connection between distributive predicates
and collective predicates. This is required whenever a predicate over atoms
semantically interacts with other elements that range over collections (e.g. sets).
In the theory of plurals, such a mapping is often referred to as a distributivity op-
erator. The power set operator - is sufficient as a distributivity operator for our
6 S&D also assume a distributive shift, which is irrelevant for our purposes here. In addition, Van
der Does (1992) considers a third collective shift but (inconclusively) dismisses it in his 1993 article.
purposes in this paper. 7 We say that a set A ⊆ ℘(E) is a distributed set of atoms
(DSA) iff A = ℘(A_0) for some set of atoms A_0 ⊆ E.
In sentences where the second argument of the determiner is an ordinary distributive
predicate, its meaning under S&D's treatment can be defined as a distributed
set of atoms rather than a set of atoms. This is needed in order to match
the type of the second argument of the lifted Atom-Set determiner. For instance, the
standard meaning of sentence (7) below is captured using the N operator as in (8),
and not simply by directly applying the Atom-Atom denotation of the determiner
exactly five to two sets of atoms, as in standard GQT.
(7) Exactly five students sang.
(8) (N(exactly 5'))(student')(℘(sing')),
where student', sing' ⊆ E.
It is easy to verify that this analysis is equivalent to the standard analysis of (7).
More generally, observe the following fact.
Fact 1 For every conservative Atom-Atom determiner D over E and all A, B ⊆ E: (N(D))(A)(℘(B)) = D(A)(B).
3.2 Problems for S&D's strategies
One empirical problem for S&D's type shifting analysis follows from a warning
in Van Benthem (1986:52-53), and is accordingly referred to as the Van Benthem
problem for plural quantification. Van Benthem mentions that any general existential
lifting such as the E operator is problematic, because it turns any Atom-Atom
determiner into an Atom-Set determiner that is upward right-monotone.
Quite expectedly, this property of the E operator is empirically problematic
with many Atom-Atom determiners that are not upward right monotone. For in-
stance, using the E operator, sentence (9) below, with the MON# determiner no,
gets the interpretation in (10).
(9) No students met yesterday at the coffee shop.
(10) (E(no'))(student')(meet')
This analysis of sentence (9) makes the strange claim that an empty set met yesterday
at the coffee shop, which is clearly not what the sentence claims. 8 The
7 A more common version of a distributivity operator is the ℘+ operator, which maps each set
A to its power set minus the empty set: ℘(A) \ {∅}. Using the ℘+ operator would not
change the results in this paper, and therefore we use the simpler power set operator. For arguments
in favor of a distributivity operator that is a more sophisticated than - see Schwarzschild (1996).
For counterarguments see Winter (2000).
8 Using ℘+ instead of ℘ in the definition of the E operator (which is what S&D do), sentence
(9) is analyzed as a contradiction. Obviously, this is not the correct analysis of the sentence either.
existential analysis reverses the monotonicity properties of the determiner no, so
that E(no') is ↑MON↑. However, the determiner no remains ↓MON↓ in this case
even though it is used for quantification over collections. For instance, sentence
(9) entails sentence (11a) below and does not entail sentence (11b).
(11) a. No students met yesterday evening at the coffee shop.
b. No people (ever) met at the coffee shop.
The problem is manifested even more dramatically when the second argument
of the determiner is a distributive predicate (distributed by ℘). For instance, sentence
(12) below is analyzed as in (13), which is a tautology (choose X = ∅).
(12) Less than five students smiled.
(13) (E(less than 5'))(student')(℘(smile'))
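A toy check of the Van Benthem problem with the sketches above: the E analysis of (12) comes out true no matter how many students smiled, because the empty set is always a member of the distributed predicate.

    students = frozenset({'s1', 's2', 's3', 's4', 's5', 's6'})
    smiled   = students                                      # all six students smiled
    less_than_five = lambda A, B: len(A & B) < 5
    print(E_lift(less_than_five)(students, frozenset(subsets(smiled))))   # True: (12) is predicted true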
Although the Van Benthem problem indicates that the existential operator is
inadequate, this operator still captures one effect that the N operator by itself does
not handle. To see that, reconsider sentence (6), restated in (14) below, and its
analyses using the N and E operators.
(14) Exactly five students drank a whole glass of beer together.
a. (N(exactly 5'))(student')(drink_beer')
b. (E(exactly 5'))(student')(drink_beer')
The analysis in (14a) requires that the total number of students in sets of students
that drank a glass of beer together is five. However, in addition, sentence (14)
also requires that there was a set of five students who drank a whole glass of beer
together. The E operator in the analysis in (14b) imposes this requirement, but
fails to take into account the total number of students who drank a glass of beer,
and therefore leads to the Van Benthem problem. A similar dilemma arises with
upward monotone determiners, as in the following example.
More than five students drank a whole glass of beer together.
In this case too, the N operator imposes a requirement only on the total number
of students involved in beer drinking events, whereas what we need in this case
is an existential reading, requiring that there actually was a set with more than
five students who drank a whole glass of beer together. In S&D's systems there
9 If we replace ℘ by ℘+, sentence (12) is analyzed as being equivalent to at least one student
smiled, which is bad enough.
is no clear specification of how to capture both aspects of collective quantification
without generating undesired truth conditions. 10
Another problem for S&D's strategy is in the type of collective determiners it
assumes. In S&D's proposal, any collective determiner is an Atom-Set determiner.
However, in many cases, a collective predicate may also appear in the left argument
of a determiner. For instance, reconsider the following example from (2).
(16) All the colleagues cooperated.
In this case, the plural noun colleagues is collective: to say that a and b are colleagues
is not the same as saying that a is a colleague and b is a colleague. Other
collective nouns like brothers, sisters, friends etc. lead to similar problems. Other
cases where the first argument of a determiner is collective appear when a distributive
noun is modified by a collective predicate. For instance, consider the following
examples.
(17) Exactly four similar students smiled.
(18) Most of the students who saw each other played chess.
In these cases, the interpretation of the first argument involves intersection of a distributive
predicate (distributed by -) with a collective predicate. For instance, the
denotation of the nominal similar students in (17) is obtained by intersecting the set
of sets of students with the set of sets of similar entities. These examples indicate
that collective determiners should allow collective predicates in both arguments,
and not only in the right argument as in S&D's lifting strategies.
3.3 Dalrymple et al.'s bounded composition operator
Dalrymple et al. (1998) concentrate on the semantics of reciprocal expressions (each
other, one another) in sentences with simple plural NPs such as the children and
Mary and John. However, they also address the problem of interpreting reciprocal
expressions in the following sentences, where a collective reciprocal predicate
combines with a quantifier of more complex monotonicity properties.
(19) a. Many people are familiar to one another.
b. Most couples in the apartment complex babysit for each other.
c. At most five men hit each other.
Dalrymple et al. observe the existential requirement in sentences (19a-b), and their
treatment of such sentences, with MON↑ determiners, is accordingly a reformulation
of the E operator of Scha and Van der Does. However, to overcome the
problems that the existential requirement creates in sentences such as (19c), with
10 Van der Does tries to overcome this problem by proposing a syntactic mechanism of
feature propagation that is designed to rule out some of the undesired effects of his semantic system.
non-MON" determiners, Dalrymple et al. use a different analysis for such sen-
tences. In example (19c), their analysis requires that each set of men who hit each
other contains at most five men. It seems quite likely that for the case of sentence
(19c), this analysis reflects a possible reading. 11
Dalrymple et al. combine the two processes they assume for MON↑ and non-MON↑
determiners into one general operator that they call Bounded Composition.
This operator can be cast as a lifting operator of determiners, so that for any Atom-Atom
determiner D over E, the corresponding Atom-Set determiner BC(D) is
defined for all A ⊆ E and B ⊆ ℘(E) by: 12
The first conjunct in this definition reflects a counting process, parallel to S&D's
N operator. 13 The second conjunct adds to this process an existential requirement,
similar to S&D's E operator. However, there are two modifications in the usage of
these two processes, compared to S&D's strategies:
1. Unlike the N operator, the counting process within BC does not require the
total union of B ∩ ℘(A) to be in the generalized quantifier D(A), but only
requires that each set of maximal cardinality within B ∩ ℘(A) is in D(A).
2. The existential requirement overcomes Van Benthem's problem, due to the
disjunct B ∩ ℘(A) = ∅, which properly weakens the E operator with
determiners that satisfy D(A)(∅) = 1.
The motivation for the introduction of the bounded composition operator is to
treat collective readings of GQs with reciprocal predicates. However, we believe
that the combination of counting and existential processes is a promising aspect of
Dalrymple et al.'s proposal also for other cases of collectivity. On the other hand,
the empirical adequacy of the counting process as implemented within the BC operator
is not completely clear to us. Some speakers we consulted accept Dalrymple
et al.'s assumption that sentences such as (19c), with a downward monotone quan-
tifier, can be true even though the 'total' set of people who participated in sets of
Dalrymple et al.'s intuitions about the meaning of (19c) seem to be similar to those of
Schein (1993), who proposes an event semantics of plurals. For some remarks on the empirical
question concerning the generality of this analysis see our discussion below.
12 The operator that Dalrymple et al. propose is defined as a 4-ary relation between a determiner,
a set of atoms, a binary relation and the meaning of the reciprocal expression. For instance, in
sentence (19a), these are (respectively) the meanings of the expressions many, people, familiar to
and one another. For our purposes it is sufficient to consider the determiner alone, because the
compositional interpretation of reciprocal predicates such as familiar to one another is not in the
focus of this article.
13 The requirement |A \ X| ≥ |A \ Y| in this conjunct is needed only when we assume infinite
domains. Over finite domains it follows from the requirement |X| ≤ |Y|.
people who hit each other is not in the quantifier (i.e. in this case it includes more
than five members). However, these judgments did not seem to be highly robust
and they vary considerably when the determiner at most five is replaced by other
non-MON" determiners such as less than five, exactly five or between five and ten.
For example, consider the following sentence.
(20) Exactly five students hit each other.
Assume that there was a set of exactly five students who hit each other, and that in
addition there was only one other set of students A who hit each other. In case there
are four students in A, then the BC operator renders sentence (20) true. However,
if A contains six students then the BC operator takes sentence (20) to be false. We
did not trace such a difference in our informants' intuitions about the sentence.
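The following sketch is a rough finite-domain reading of bounded composition, based only on the prose description above (Dalrymple et al.'s official definition is a 4-ary relation and includes a cardinality clause needed for infinite domains, cf. footnotes 12-13); on the scenario just described it indeed distinguishes the four-student from the six-student case.

    def bc(D):
        def lifted(A, B):
            relevant = [X for X in B if X <= A]                    # B ∩ ℘(A)
            maxcard  = max((len(X) for X in relevant), default=0)
            counting = all(D(A, X) for X in relevant if len(X) == maxcard)
            existential = (not relevant) or any(D(A, X) for X in relevant)
            return counting and existential
        return lifted

    men  = frozenset('abcdefghij')
    hit1 = frozenset({frozenset('abcde'), frozenset('fghi')})     # a five-set and a four-set hit each other
    hit2 = frozenset({frozenset('abcde'), frozenset('efghij')})   # a five-set and a six-set hit each other
    print(bc(exactly(5))(men, hit1))   # True
    print(bc(exactly(5))(men, hit2))   # False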
As a general operation for deriving collective readings of GQs, the BC operator
shows some undesired effects when the quantifier it derives interacts with a
so-called 'mixed' predicate. These predicates (unlike predicates formed with re-
ciprocals) can also be true of singleton sets, in addition to sets with two or more
elements. Consider for example the following sentence.
(21) At most five students drank a whole glass of beer (together or separately).
In a situation where there are ten students and each student drank a whole glass of
beer on her own, sentence (21) is clearly false. However, if we assume that no students
shared any glass of beer between them, the BC operator makes sentence (21)
true, because there is no relevant set of students with more than five members: all
the relevant sets are singletons. For these reasons, in the proposal below we choose
to study the counting process of the N operator. We leave for further research the
empirical study of the exact interpretation of sentences such as (19c), as well as the
formal study of the BC operator that is motivated by their interpretation.
3.4 Determiner fitting and the witness condition
To overcome the two problems of S&D's mechanism that were pointed out in sub-section
3.2, Winter (1998,2001) proposes to reformulate the N and E operators as
one operator called dfit (for determiner fitting). This operator, unlike S&D's N and
operators and Dalrymple et al.'s BC operator, maps an Atom-Atom determiner
into a Set-Set determiner, i.e. a determiner where both arguments can be collective
predicates. To define the dfit operator, let us first reformulate N as an operator
from Atom-Atom determiners to Set-Set determiners. This reformulation of the N
operator is called count, and is defined as follows.
Definition 5 (counting operator) Let D be an Atom-Atom determiner over E.
The corresponding Set-Set determiner count(D) is defined for all A, B ⊆ ℘(E) by:
(count(D))(A)(B) = D(∪A)(∪(A ∩ B)).
By giving a symmetric Set-Set denotation to collective determiners, this definition
involves two separate sub-processes within the process of counting members of
collections. The first sub-process is the intersection of the right argument with the
left argument of the determiner. The second sub-process is the union of the sets in
each of the two arguments. The intersection sub-process reflects the conservativity
of (distributive/collective) quantification in natural language: the elements of the
right argument that need to be considered are only those that also appear in the
left argument. This also holds for the Set-Set determiner count(D). 14 The union
sub-process is simply a natural 'participation' adjustment of the type of the Atom-Atom
determiner's arguments: for any collective predicate A, an atom x is in ∪A
iff x participates in a set in A.
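In the toy implementation, count (as we read definition 5) looks as follows, together with a quick check that on two distributed sets of atoms it agrees with the underlying Atom-Atom determiner (cf. fact 1 above):

    def count_lift(D):
        # (count(D))(A)(B) = D(union of A)(union of (A ∩ B)), with A, B sets of sets of atoms
        def lifted(A, B):
            return D(frozenset().union(*A), frozenset().union(*(A & B)))
        return lifted

    A, B = frozenset({'a', 'b'}), frozenset({'a'})
    print(count_lift(all_)(frozenset(subsets(A)), frozenset(subsets(B))) == all_(A, B))   # True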
The count operator generalizes S&D's N operator in the following sense.
Fact 2 For every conservative Atom-Atom determiner D over E, for all A ⊆ E
and B ⊆ ℘(E): (count(D))(℘(A))(B) = (N(D))(A)(B).
on distributive predicates (cf. fact 1).
Corollary 3 For every conservative Atom-Atom determiner D over E, for all
A, B ⊆ E: (count(D))(℘(A))(℘(B)) = D(A)(B).
erator) is combined with an existential requirement. In order to do that, a useful
notion is the notion of witness set from Barwise and Cooper (1981).
Definition 6 (witness set) Let D be an Atom-Atom determiner over E and let A
and W be subsets of E. We say that W is a witness set of D and A iff W ⊆ A
and D(A)(W) = 1.
For example, the only witness set of the determiner every 0 and the set man 0 is
the set man 0 itself. A witness of some 0 and man 0 is any non-empty subset of
man 0 . We sometimes sloppily refer to a witness set of a determiner D and a set A
as 'witnessing the quantifier D(A).' 15
To the count operator we now add an 'existential' condition that is formalized
using a witness operator.
14 Note that we still assume that the Atom-Atom determiner D that is lifted by the count operator
is conservative. However, even when D is conservative, lifting it by an alternative operator
count', which does away with the intersection process within count, would not
guarantee sound conservativity equivalences such as between (i) and (ii) below.
(i) All the students are similar.
(ii) All the students are similar students.
Using the count 0 operator, sentence (i) would be treated, contrary to intuition, as being true if every
student is similar to something else (potentially a non-student). But sentence (ii) would be treated by
the count 0 operator as being false in such a situation.
Barwise and Cooper define witness sets on quantifiers explicitly, but they reach the argument A
indirectly by defining what they call a live on set of the quantifier. This complication is unnecessary
for our purposes.
Definition 7 (witness operator) Let D be an Atom-Atom determiner over E. The
corresponding Set-Set determiner wit(D) is defined for all A, B ⊆ ℘(E) by:
(wit(D))(A)(B) = 1 iff A ∩ B = ∅, or there is W ∈ A ∩ B that is a witness set of D and ∪A.
In words: the witness operator maps an Atom-Atom determiner D to a Set-Set
determiner that holds of any two sets of sets A, B iff their intersection A ∩ B is
empty or contains a witness set of D and ∪A.
A similar strategy for quantification over witness sets is proposed in Szabolcsi
(1997). While Szabolcsi's witness operation is used only for MON↑
determiners, the witness operator that is defined above is designed to be used for
all determiners. This is the reason for the disjunction in the definition of the wit
operator with an emptiness requirement on A " B. As we shall see below, this will
allow us to apply the witness operator as a general strategy, also in cases like (9),
without imposing undesired existential requirements as in (10). The general determiner
fitting operator that we use is simply a conjunction of the counting operator
and the witness operator.
Definition 8 (determiner fitting operator) Let D be an Atom-Atom determiner
over E. The corresponding Set-Set determiner dfit(D) is defined for all A, B ⊆ ℘(E) by:
(dfit(D))(A)(B) = 1 iff (count(D))(A)(B) = 1 and (wit(D))(A)(B) = 1.
To exemplify the operation of the dfit operator, consider the analysis in (23) below
of sentence (14), repeated as (22). In this analysis, the noun students is treated as
the distributed set of atoms -(student 0 ). This is needed in order to match the
general type of the left argument of the Set-Set determiner that is derived by the
dfit operator.
(22) Exactly 5 students drank a whole glass of beer together.
(23) dfit(exactly 5')(℘(student'))(drink_beer')
The first conjunct in this formula is derived by the count operation, and guarantees
that exactly five students participated in sets of students drinking beer. The second
conjunct is a result of the witness condition, and it verifies that there exists at least
one such set that is constituted by exactly five students.
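A sketch of wit and dfit in the same toy setting (again following our reconstruction of the definitions), with a check of the two conjuncts on scenario (22): one group of five students verifies both conjuncts, while two groups of two and three students satisfy the counting condition but fail the witness condition.

    def wit_lift(D):
        def lifted(A, B):
            common = A & B
            atoms = frozenset().union(*A)
            return (not common) or any(W <= atoms and D(atoms, W) for W in common)
        return lifted

    def dfit(D):
        return lambda A, B: count_lift(D)(A, B) and wit_lift(D)(A, B)

    students   = frozenset({'s1', 's2', 's3', 's4', 's5', 's6', 's7'})
    A          = frozenset(subsets(students))
    one_group  = frozenset({frozenset({'s1', 's2', 's3', 's4', 's5'})})
    two_groups = frozenset({frozenset({'s1', 's2'}), frozenset({'s3', 's4', 's5'})})
    print(dfit(exactly(5))(A, one_group))    # True
    print(dfit(exactly(5))(A, two_groups))   # False: no single witness set of five students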
By combining the counting process and the existential process in this way, the
dfit operator captures some properties of collective quantification that seem quite
puzzling under S&D's double lifting strategy. On the one hand, as Van Benthem's
problem indicates, in sentences such as (9) and (12), with MON↓ determiners, the
existential strategy is problematic and only the N operator is needed. In sentences
such as (15), with MON↑ determiners, the existential strategy is needed and the N
operator is redundant. Moreover, when the determiner is MON−, as in (14), the
existential analysis is needed in combination with the neutral analysis. The dfit
operator distinguishes correctly between these cases. As we will presently see, in
those cases where a simple existential analysis would be problematic, the witness
condition is trivially met due to the counting condition within dfit. We characterize
two such cases: cases such as (12), where the two arguments of the determiner
are distributed sets of atoms, and cases such as (9), where the determiner is downward
right-monotone. In other cases, the witness condition does add a non-trivial
requirement to the counting operator.
First, let us observe that when the arguments of a Set-Set determiner dfit(D)
are two distributed sets of atoms ℘(A) and ℘(B), the witness operator adds nothing
to the requirement imposed by the counting operator.
Fact 4 Let D be a conservative Atom-Atom determiner over E. Then for all
A, B ⊆ E: if (count(D))(℘(A))(℘(B)) = 1 then (wit(D))(℘(A))(℘(B)) = 1.
Proof. Assume that (count(D))(℘(A))(℘(B)) = 1. By conservativity, the witness set
W = A ∩ B satisfies the existential requirement in wit.
From this fact and corollary 3 it directly follows that the witness operator is
redundant in dfit when the arguments of the determiner are both distributed sets of
atoms.
Corollary 5 Let D be a conservative Atom-Atom determiner over E. Then for all
A, B ⊆ E: (dfit(D))(℘(A))(℘(B)) = D(A)(B).
Similarly, when a determiner is downward monotone in its right argument, the
witness operator is again redundant in the definition of dfit :
Fact 6 Let D be a MON↓ Atom-Atom determiner over E. Then for all A, B ⊆ ℘(E):
if (count(D))(A)(B) = 1 then (wit(D))(A)(B) = 1.
In other cases - that is, when A or B are not DSAs and D is not MON↓ -
(count(D))(A)(B) does not entail (wit(D))(A)(B). Accordingly, an existential
requirement is invoked by the sentence. This is illustrated by the entailments from
the sentences in (24) to sentence (25):
(24) a. More than/exactly five students drank a whole glass of beer together.
b. More than five/exactly five students who drank a whole glass of beer
together smiled.
c. More than/exactly five students who drank a whole glass of beer together
hit each other later.
(25) There was (at least) one group of more than/exactly five students who
drank a whole glass of beer together.
These entailments are not captured by the count operator alone, but they follow
from the wit requirement within the dfit operator.
3.5 Determiners that are trivial for plurals
Before moving on to the next section and to the monotonicity properties of Set-Set
determiners under the dfit operator, there is an additional notion that we need to
introduce, which refines the right triviality property for plural determiners. An illustration
of the point is the behavior of the definite article. In singular sentences
such as (26) below, the 'Russellian' interpretation of the definite article in GQT
analyses it as a universal determiner that in addition imposes a uniqueness condition
on its left argument. The definition of this determiner is given in (27).
(26) The student smiled.
(27) the'_sg(A)(B) = 1 iff |A| = 1 and A ⊆ B
By contrast, in a plural sentence such as (28) below, this definition would be inad-
equate. The sentence here imposes a plurality requirement on the left argument of
the determiner, rather than uniqueness. A possible definition of this determiner is
given in (29).
(28) The students smiled.
(29) the'_pl(A)(B) = 1 iff |A| ≥ 2 and A ⊆ B
Without getting into the question of definites, let us make one simple point. We
do not expect the meaning in (27), which is appropriate for the singular definite
article, to be a meaning of any plural determiner. The reason is that such a determiner
function, which imposes singularity on its left argument, would contradict
the implication, prominent with plurals, that there are at least two elements in the
left argument. 17 Consequently, no plural noun could appear with such a determiner
without leading to a trivial statement: a contradiction or a tautology. As
with singular determiners, we do not expect such trivialities with plural determin-
ers. Crucially, note that the determiner in (27) is not RTRIV (or LTRIV). We can
therefore assume that plural determiners show a stronger notion of non-triviality
than RTRIV, which we call triviality for plurals (PTRIV). Formally:
Definition 9 (triviality for plurals) A determiner D over E is called trivial for
plurals (PTRIV) iff for all A, B, C ⊆ E such that |A| ≥ 2: D(A)(B) = D(A)(C).
Most contemporary works assume that the definite article does not denote a determiner, but
some version of the iota operator. For works that propose a unified definition of the definite article
for singular and plural NPs, see Sharvy (1980) and Link (1983). We believe that this view on articles
is justified (cf. Winter (2001)), and that (in)definite articles such as a and the should not denote
determiners in GQT. Therefore, the dfit operator does not apply to these articles. For one thing,
it does not seem possible to derive the collective reading of the definite article from its singular
denotation in (27) using the same operator that applies to other determiners.
17 Whether this implication is truth-conditional or presuppositional is irrelevant here. See some
discussion of this point in Krifka (1992), Schwarzschild (1996), Chierchia (1998) and Winter
(1998,
Informally, a PTRIV determiner is indifferent to the identity of its right argument
whenever its left argument is a set with two or more entities. The determiner the'_sg
as defined above is PTRIV but not RTRIV. We hypothesize that all plural determiner
expressions in natural language (though not necessarily all singular determiner
expressions) denote non-PTRIV determiners. This hypothesis about plural
determiners will play a role in the next section.
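In the toy setting, triviality for plurals (as we read definition 9) can be tested by brute force; the singular definite article from (27) then comes out PTRIV, while all does not.

    def is_ptriv(D, E):
        return all(D(A, B) == D(A, C)
                   for A in subsets(E) if len(A) >= 2
                   for B in subsets(E) for C in subsets(E))

    the_sg = lambda A, B: len(A) == 1 and A <= B      # the Russellian singular definite from (27)
    E = frozenset({'a', 'b', 'c'})
    print(is_ptriv(the_sg, E))   # True
    print(is_ptriv(all_, E))     # False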
4 Monotonicity properties of collective determiners
The facts that were mentioned in the introduction indicate that standard monotonicity
properties of determiners are not always preserved when they apply to collective
predicates. In this section we show that, using the count operator, monotonicity
properties in the right argument of a determiner are always preserved, in agreement
with intuition. However, whether or not a determiner preserves its left monotonicity
property when it applies to collective predicates, depends on its monotonicity
property in the right argument. We observe that the reason for these different results
for determiners of different monotonicity properties is the asymmetric conservativity
element within the definition of the count operator, which intersects the
right argument with the left argument, but not vice versa. Further, we mention
without proof that the same results concerning (non-)preservation of monotonicity
properties hold for the dfit operator, which is defined using count.
The following fact summarizes all the cases where (non-)monotonicity is preserved
under count.
Fact 7 Let D be a determiner over E. If D belongs to one of the classes ↑MON↑,
↓MON↓, MON↑ or MON↓, then the Set-Set determiner count(D) belongs to the
same class. If D is conservative and −MON (MON−), then count(D) is also
−MON (MON−).
Proof sketch. For the monotone cases the claim immediately follows from the
definition of count.
For the non-monotone cases it is enough to note that if D is −MON, then
there are A_0 ⊆ A_1 ⊆ E, A_2 ⊆ A_3 ⊆ E and B, C ⊆ E s.t. D(A_0)(B) = 1,
D(A_1)(B) = 0, D(A_3)(C) = 1 and D(A_2)(C) = 0. Since D is conservative,
we can apply corollary 3 (page 13) and get the corresponding values for count(D) on
the distributed sets of atoms ℘(A_0), ℘(A_1), ℘(A_2), ℘(A_3), ℘(B) and ℘(C),
which shows that count(D) is −MON too. The proof is similar when D is
MON−.
Fact 7 explains why in many cases, as in the following examples, determiners
do not show any surprising monotonicity patterns when they appear with collective
predicates.
(30) Some (rich) students drank a whole glass of (dark) beer together ⇒ Some
students drank a whole glass of beer together. (some is ↑MON↑)
(31) No students drank a whole glass of beer together ⇒ No (rich) students drank
a whole glass of (dark) beer together. (no is ↓MON↓)
(32) All/most of the students drank a whole glass of dark beer together ⇒ All/most
of the students drank a whole glass of beer together.
(all and most are MON↑)
(33) Exactly five students drank a whole glass of beer together ⇎
Exactly five rich students drank a whole glass of (dark) beer together.
(exactly five is −MON−)
However, as we saw in (4) above, determiners are sometimes more surprising in
their monotonicity behavior with collective predicates. To cover all the monotonicity
classes of determiners, we also have to consider the left argument of determiners
such as all, not all and some but not all, which are monotone in their left argument
but have a different monotonicity property in their right argument. The theorem below
will show that almost all these determiners lose their left monotonicity under
count. The only exceptions are PTRIV determiners that are ↓MON↑ or ↑MON↓.
Provably, these determiners preserve left monotonicity under count. However, as
claimed above, PTRIV determiners are not expected in the class of plural determiners
in natural language. Moreover, it is not hard to show that in each of the
monotonicity classes #MON" and "MON# there is only one PTRIV determiner
that is conservative, PI and not RTRIV. These two determiners are the following
determiners - D 0 and its complement
Once these two special cases are observed, we can establish the following result.
Theorem Let D be a conservative determiner over E that is not RTRIV. If D satisfies
one of the conditions (i) and (ii) below, then the Set-Set determiner count(D)
is −MON.
(i) D is non-PTRIV and is either ↓MON↑ or ↑MON↓.
(ii) D is PI and is either ↓MON− or ↑MON−.
To make the proof of this theorem more readable, we first prove the following
lemma.
Lemma 8 Let D be a conservative determiner over E. If D satisfies one of the
conditions (i) and (ii) below then there are X, Y, Y' ⊆ E satisfying condition (*):
(i) D is non-PTRIV and ↓MON↑.
(ii) D is PI and ↓MON−.
Proof. Assume first that D is non-PTRIV and #MON". Because D is non-
PTRIV and conservative, there are
and it follows from
Therefore, we can choose
If, on the other hand, D is PI and #MON-, then (by #MON- and conserva-
tivity) there are B s.t. the following
hold:
(D is not MON#);
(D is not MON").
It follows that both A and A 0 are not empty. If jAj ? 1 then simply choose
Let - be a permutation on E s.t.
follows from ( ) that
;. Otherwise, A fyg, and from ( ),
it follows from
Therefore, we can choose
Proof of theorem. We first prove the theorem for a determiner D that is either
↓MON↑ or ↓MON−. Because D is not RTRIV, it cannot be ↑MON. Using the
same reasoning as in the proof of the non-monotone cases in fact 7, it is straightforward
to show that in each of the two cases count(D) is not ↑MON. It is left to
be shown that count(D) is also not ↓MON. By lemma 8, there are X, Y, Y' ⊆ E
s.t. condition (*) holds:
three subsets of -(E), A, A 0 and B as follows:
g.
it follows from ( ) that
Hence, count(D) is not #MON.
Assume now that D is either ↑MON↓ or ↑MON−. Again, it is straightforward
to show that count(D) is not ↓MON. To see that count(D) is also not ↑MON,
consider the negation of D, ¬D, defined by (¬D)(A)(B) = 1 iff D(A)(B) = 0.
The determiner ¬D is either ↓MON↑ or ↓MON−, respectively. Further, ¬D is
non-PTRIV if and only if D is non-PTRIV, and the same holds for the properties
CONS, PI and non-RTRIV. Thus, it follows from condition (*) above that the
same X, Y, Y' satisfy the corresponding condition for ¬D.
Using the same A, A' and B as above we get that count(D) is not ↑MON.
In the proof of the theorem we use one construction of subsets of ℘(E) - A,
A' and B - that applies to all four classes of determiners. However, it is not always
convenient to apply this construction to sentences in natural language, because the
set A' is neither a DSA nor a purely collective predicate (i.e. it contains singletons
in it). To overcome this empirical difficulty, assume that condition (*) is satisfied
for some determiner D that satisfies the conditions in the theorem.
This assumption is tenable, with no loss of generality, for all determiners that are
↓MON↑. Under this assumption, we can use the following construction. Leave A
and B as they are in the proof, and use instead of A' a set A'' consisting of couples
(two-element subsets).
Now consider the following two sentences:
(34) a. All the students drank a whole glass of beer together.
b. All the students who've been roommates drank a whole glass of beer
together.
Assume that X = {s1, s2} and that Y and Y' are chosen so that these sets
satisfy condition (*) with respect to the determiner all. Following the construction above:
Assume now that the denotation of students is A, and that the denotation of students
who've been roommates is A 00 , i.e. all the couples of students. Assume further
that the denotation of drank a whole glass of beer together is B. In this situation
it is clear that sentence (34a) is true. However, since, for instance, s1 and s3 were
roommates, but did not drink a whole glass of beer together, sentence (34b) is not
true in this situation.
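A concrete instance of this scenario can be checked with the toy count operator from section 3 (the drinking sets are chosen so that the same model also serves example (35) below):

    S          = frozenset({'s1', 's2', 's3'})
    students   = frozenset(subsets(S))                                    # the distributed noun denotation
    roommates  = frozenset(frozenset(p) for p in combinations(S, 2))      # all couples of students
    drank_beer = frozenset({S, frozenset({'s1', 's2'})})                  # the whole group, and s1 with s2, drank together

    count_all = count_lift(all_)
    print(count_all(students, drank_beer))    # True:  (34a)
    print(count_all(roommates, drank_beer))   # False: (34b), although roommates is a subset of students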
An example for a ↑MON− determiner is the determiner some but not all. This
determiner is formally defined as follows, for all A, B ⊆ E:
some but not all'(A)(B) = 1 iff A ∩ B ≠ ∅ and A ⊄ B.
Consider the following two sentences:
(35) a. Some but not all of the students who've been roommates drank a whole
glass of beer together.
b. Some but not all of the students drank a whole glass of beer together.
Clearly, the same X, Y and Y' from the previous example satisfy condition (*)
with respect to the determiner some but not all. Following the same line, assume
that the denotations of students, students who've been roommates and drank
a whole glass of beer together are the same as in the previous example. Now,
sentence (35a) is true in this situation, since there is a set of students that were
roommates and also drank a whole glass of beer together, namely {s1, s2}, but it is
not true that all the students who've been roommates drank a whole glass of beer
Monotonicity of D     Monotonicity of count(D)     Example
↑MON↑                 ↑MON↑                        some
↓MON↓                 ↓MON↓                        less than five
↓MON↑                 −MON↑ (!)                    all
↑MON↓                 −MON↓ (!)                    not all
−MON−                 −MON−                        exactly five
−MON↓                 −MON↓                        not all and (in fact) less than five (of the)
−MON↑                 −MON↑                        most
↓MON−                 −MON− (!)                    all or less than five (of the)
↑MON−                 −MON− (!)                    some but not all (of the)
Table 2: (non-)monotonicity under count
together (cf. the argument in the previous example). On the other hand, sentence
(35b) is not true in this situation, simply because all the students drank a whole
glass of beer together.
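The same toy model shows the loss of upward left monotonicity for some but not all:

    some_but_not_all = lambda A, B: some(A, B) and not all_(A, B)
    count_sbna = count_lift(some_but_not_all)
    print(count_sbna(roommates, drank_beer))   # True:  (35a)
    print(count_sbna(students, drank_beer))    # False: (35b), although students is a superset of roommates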
Table 2 summarizes the (non-)preservation of monotonicity properties under
count for the nine classes of determiners according to the monotonicity of their
two arguments. Note again that for each of the two classes ↓MON↑ and ↑MON↓,
there is an exception to the result that is mentioned in the table: the PTRIV determiners.
The exclamation marks emphasize the cases in which left monotonicity is
not preserved.
There are two natural extensions to these results, which we state here without
proof. First, we note that fact 7 and the 'monotonicity loss' theorem equally hold
when count is replaced by dfit. Thus, adding the witness condition to count does
not change the monotonicity (non-)preservation results that were established above
for count. The proof of this claim is quite laborious, but routine. Linguistically,
it implies that existential processes in collective quantification should not lead to
problems that are similar to Van Benthem's problem, or to any other change in the
monotonicity properties of determiners beyond what was shown above.
Another point that is worth mentioning is that the results that were shown above
equally hold for global determiners: functors from domains E to determiners DE
over E. It is important to appeal to global functors because linguistic items such
as all, some, five etc. have global definitions and properties, and are not simply
defined over a given domain, as assumed throughout this article. We say that a
global determiner D is conservative, permutation invariant, (left/right) trivial or
(upward/downward left/right) monotone, if the local determiner DE satisfies the
respective property for any non-empty domain E. Using this global perspective,
all the results that were proven above for local determiners equally hold of global
determiners that satisfy the extension property (cf. Van Benthem (1984)). The
When we say that a global determiner D satisfies extension, this means that given a domain E
and any two sets A, B ⊆ E, the local determiners D_{E'} s.t. E ⊆ E' all agree on the truth value that
they assign to A and B.
reason for assuming this global property is our results concerning non-monotone
determiners. Note that a global determiner D is −MON (MON−) if and only if
there is E s.t. D_E is not ↓MON (MON↓) and there is E' s.t. D_{E'} is not ↑MON
(MON↑). From this it does not yet automatically follow that there is a domain E
s.t. D_E is −MON (MON−), as required in fact 7 and the main theorem. How-
ever, provided that a global determiner D is non-trivial in both its arguments and
satisfies extension, the existence of such a domain does follow from the assumption
that D is -MON (MON-). Since most works on GQT assume that natural
language determiners satisfy extension (see Keenan and Westerståhl (1996)), the
generalization of our results to global determiners is both linguistically and technically
straightforward.
5 Conclusion
The formal study of the interactions between quantifiers and collective predicates
has to deal with many seemingly conflicting pieces of evidence that threaten to blur
the interesting logical questions that these phenomena raise. In this article we have
studied the monotonicity properties of collective quantification, which is a central
aspect of the problem of collectivity. We showed that to a large extent, the principles
that underly monotonicity of collective quantification follow from standard
assumptions on quantification in natural language in general. The count opera-
tor, which is a straightforward extension of Scha's `neutral' analysis of collective
determiners, involves a simple 'conservativity element' - intersection of the right
argument with the left argument, and a 'participation element' - union of both set
of sets arguments. The conservativity element within the count operator is responsible
for the two a priori unexpected asymmetries in the monotonicity behavior of
collective determiners:
1. Only determiners with 'mixed' monotonicity properties change their behavior
when they quantify over collections.
2. Only the left monotonicity properties of such determiners may change in
these cases.
We believe that the reduction of certain asymmetries in the domain of collective
quantification to the asymmetric conservativity principle is a desirable result that
reflects another aspect of the central role that this principle plays in natural language
semantics.
Two open questions should be mentioned. First, in this article we did not address
the 'universal' reading that certain collective determiners show, as treated
by Dalrymple et al.'s bounded composition operator. More empirical research is
needed into these phenomena, which indicate that there may be more than one
strategy of plural quantification. The formal properties of such universal strategies
and the linguistic restrictions on their application should be further explored. Sec-
ond, although we characterized the logical monotonicity properties of collective
determiners, we did not study the linguistic implications that these properties may
have for the analysis of negative polarity items. These items normally appear only
in downward entailing environments, and it should be checked whether they are
sensitive to 'monotonicity loss' under collectivity of all. For instance, a sentence
such as the following, where all is not left downward monotone, is expected not to
license the negative polarity item any in its left argument.
(36) ?All the students who had any time drank a whole glass of beer together.
Whether or not this expectation is borne out is not clear to us, and we must leave
these and other implications of 'monotonicity loss' to further research.
Acknowledgments
This research was partly supported by grant no. 1999210 ("Extensions and Implementations of Natural
Logic") from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel.
The second author was also partly supported by NWO grants for visiting the UiL OTS of the Utrecht
University in the summers of 2000 and 2001. We are indebted to Johan van Benthem for discussions
that initiated our interest in questions of monotonicity and collectivity. Thanks also to Nissim
Francez, Richard Oehrle, Mori Rimon and Shuly Wintner for their remarks. The main results in
this article were presented at the conference Formal Grammar/Mathematics of Language(FGMOL),
Helsinki, August 2001. We are grateful to the editor of JoLLI and an anonymous reviewer for thorough
and constructive comments that helped us to improve an earlier version of this work.
References
Generalized quantifiers and natural language.
Some Extensions of a Montague Fragment of English.
Plurality of mass nouns and the notion of 'semantic parameter'.
Reciprocal expressions and the concept of reciprocity.
Implication reversal in a natural language.
Quantification in an Extended Montague Grammar.
A semantic characterization of natural language determiners.
Generalized quantifiers in linguistics and logic.
quantifiers.
Polarity Sensitivity as Inherent Scope Relations.
The logical analysis of plurals and mass terms: a lattice theoretical approach. In Bäuerle et al. (eds.).
Plurals and Events.
A more general theory of definite descriptions.
Strategies for scope taking.
Questions about quantifiers.
Essays in Logical Semantics.
Language in Action: categories
Applied Quantifier Logics: collectives
Sums and quantifiers.
Flexible Boolean Semantics: coordination
Distributivity and dependency.
Flexibility Principles in Boolean Semantics: coordination
634756 | Removing Node Overlapping in Graph Layout Using Constrained Optimization. | Although graph drawing has been extensively studied, little attention has been paid to the problem of node overlapping. The problem arises because almost all existing graph layout algorithms assume that nodes are points. In practice, however, nodes may be labelled, and these labels may overlap. Here we investigate how such node overlapping can be removed in a subsequent layout adjustment phase. We propose four different approaches for removing node overlapping, all of which are based on constrained optimization techniques. The first is the simplest. It performs the minimal linear scaling which will remove node-overlapping. The second approach relies on formulating the node overlapping problem as a convex quadratic programming problem, which can then be solved by any quadratic solver. The disadvantage is that, since constraints must be linear, the node overlapping constraints cannot be expressed directly, but must be strengthened to obtain a linear constraint strong enough to ensure no node overlapping. The third and fourth approaches are based on local search methods. The third is an adaptation of the EGENET solver originally designed for solving general constraint satisfaction problems, while the fourth approach is a form of Lagrangian multiplier method, a well-known optimization technique used in operations research. Both the third and fourth method are able to handle the node overlapping constraints directly, and thus may potentially find better solutions. Their disadvantage is that no efficient global optimization methods are available for such problems, and hence we must accept a local minimum. We illustrate all of the above methods on a series of layout adjustment problems. | Introduction
Graph drawing has been extensively studied over the last fifteen years. However,
almost all research has dealt with graph layout in which the nodes are treated as
points in the layout of graphs. Unfortunately, treating nodes as points is inadequate
for many applications. For example, a textual label is frequently added to
each node to explain some important information, as in many illustrative diagrams
for engineering designs, the human body and the geography, and even a satisfactory
layout for a point-based graph may lead to node overlapping when labels are
considered. For this reason, we are interested in layout adjustment which takes an
initial graph layout with the sizes of the nodes as inputs, and then modifies the
original graph layout so that node overlapping is removed.
We primarily address the problem of layout adjustment in a dynamic context.
This is useful in interactive applications such as graph display in which sub-graphs
are enlarged and shrunk or node labels are changed. Here the aim is take an existing
graph layout and remove node overlapping while preserving the user's mental map
of the graph. Following Misue et al [17], in order to minimize change to the mental
map we preserve the graph's orthogonal ordering, that is to say, the relative ordering
of the nodes in both the x and y direction, and attempt to place nodes as closely
as possible to their original positions. We also consider layout adjustment in the
context of a static graph. In this case we wish to minimize the area of the graph
in the new layout. However, since we assume that some sophisticated graph layout
algorithm has been used to give an initial layout for the graph, like the dynamic
case we still try to preserve the initial layout while removing node overlapping.
In this paper we study four different proposals for performing graph layout adjust-
ment. All approaches model the problem as a constrained optimization problem.
An advantage of viewing layout adjustment as a constrained optimization problem
is that we can also add constraints which capture the semantics of the diagram if it
is not just a simple graph. For example, we can add constraints which specify that
certain nodes must be on the boundary of the graph or which specify the relative
placement of nodes.
Our first approach to layout adjustment is the simplest. It performs the minimum
linear scaling in both the x and y directions which will remove node-overlapping.
We give a simple and efficient algorithm based on dynamic programming for finding
this minimum. The disadvantage is that uniform scaling means that nodes are often
moved unnecessarily far, and so this approach may not lead to good layout.
The second approach is based on formulating the node overlapping problem as a
convex quadratic programming problem which can then be solved using a quadratic
solver. This approach is attractive because algorithms for convex quadratic optimization
are well understood, and global optimization is possible in polynomial
time. The main disadvantage arises from the need to model node overlapping
constraints using linear constraints. Unfortunately, the no node-label overlap constraint
between nodes u and v is inherently disjunctive, i.e. u is sufficiently to the
left of v or u is sufficiently above v or u is sufficiently to the right of v or u is
sufficiently below v. This cannot be expressed directly but must be strengthened
to obtain a linear constraint strong enough to ensure no node overlapping. Optimal
layout with respect to this stronger linear constraint may be sub-optimal with
respect to the original no node-label overlap constraints.
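For concreteness, the disjunctive no-overlap condition can be written as the following check (a sketch of ours; representing each node by its centre coordinates and label width and height is our assumption, not necessarily the formulation used later in the paper):

    def no_overlap(u, v):
        # u, v: dicts with centre coordinates 'x', 'y' and label dimensions 'w', 'h'
        return (u['x'] + u['w'] / 2 <= v['x'] - v['w'] / 2 or   # u sufficiently to the left of v
                v['x'] + v['w'] / 2 <= u['x'] - u['w'] / 2 or   # u sufficiently to the right of v
                u['y'] + u['h'] / 2 <= v['y'] - v['h'] / 2 or   # u sufficiently below v
                v['y'] + v['h'] / 2 <= u['y'] - u['h'] / 2)     # u sufficiently above v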
In contrast the third and fourth approaches allow the disjunctive node overlapping
constraint to be expressed directly. Both approaches use local search methods [1,
2, 7, 6, 27]. A local search method starts with a current value for each variable, and
by examining the local neighbourhood tries to move to a point which is closer to the
optimum. Constraints are handled as penalties to the optimization function. The
search proceeds until a local minimum is found. When the local minimum represents
a solution (all constraints are satisfied), the algorithm terminates. Otherwise, it
may penalize this local minimum to avoid visiting it again, and then continue the
search until it finds a solution. Because for layout adjustment problems we have a
starting position "close" to the global optimum, namely the initial layout, in many
cases local search can efficiently find the optimum.
The third approach is a modification of the EGENET [11] solver (an extension
of the GENET model [27]) to perform layout adjustment. Modifications are required
(a) to handle a constraint optimization problem (as opposed to constraint
satisfaction, for which EGENET is designed) and (b) to handle real (floating point)
variables, rather than variables with a discrete domain. Our work represents the
first attempt, that we know of, to investigate how the EGENET solver can be used
to solve constrained optimization problems involving real numbers (see Note 1).
Our fourth approach is an adaptation of the Lagrangian multiplier methods
(LMMs) [13, 20, 26], a classical optimization technique used in operations research.
Lagrangian multiplier methods are a general approach to constrained optimization
which have been applied successfully to many difficult problems. Here we develop a
specific Lagrangian algorithm where variables are treated in pairs corresponding to
node positions. Because of the non-convexity of the constraints, we repeatedly optimize,
using the results of the previous optimization to give a new, possibly better,
starting position for the search.
There has been little work on layout adjustment and mental map preservation.
The most closely related papers are that of Eades et al. [9], in which the mental map
was proposed for the first time and orthogonal ordering, proximity relations and
topology were given as three criteria, and that of Misue et al. [17], which provides a
force-directed algorithm, called the force scan algorithm, to perform dynamic graph
layout adjustment. We demonstrate that our methods, apart from the uniform
scaling approach, give better layout than the force scan algorithm, although they
are slower.
Other related work includes Lyons [15], who tries to improve the distribution of the
nodes in the new layout according to some measures of distribution (cluster busting)
while simultaneously trying to minimize the difference between the two layouts according
to some measures of difference (anchored graph drawing). Eades et al. [8] try
to preserve the mental map in animated graph drawing by using a modified spring
algorithm to provide smooth user transitions. Other related work includes: [19],
[4], [14], [18] and [21]. A preliminary version of our second proposal, including a
proof, appeared in [10]. For a comprehensive survey of graph drawing methods,
see [3, 22].
Finally, we mention some previous work [24, 25] on adapting the GENET model
to solve constraint optimization problems [23]. Guided Local Search (GLS) [25]
iteratively calls a local search procedure, based on the GENET variable updating
scheme, to modify and minimize an augmented cost function until a predefined
stopping condition is reached. GLS has been successful in solving Radio Link
Frequency Assignment Problems.
This paper is organized as follows. In Section 2 we describe how to view the layout
adjustment problem as a constrained optimization problem and introduce both
a static and a dynamic version of the problem. In Section 3 we describe the simplest
approach to solving the layout adjustment problem, linear scaling. In Section 4 we
show how to replace the disjunctive no-node-overlapping constraints of the layout
problem with linear approximations. This allows us to transform the layout adjustment
problem into a quadratic programming problem, which can then be handled
by any quadratic solver. In Section 5 we introduce the original EGENET model.
Then, we discuss how to modify the EGENET model to handle both the static
and dynamic layout problems as continuous constrained optimization problems. In
Section 6 we briefly describe some basic concepts in multiplier methods, and then
discuss how to adapt these concepts to derive a Lagrangian-based search procedure
specialized for layout adjustment problems. We provide an empirical evaluation of
our four proposals using a set of arbitrary graph layout problems, investigating both
their efficiency and effectiveness, in Section 7. We also compare our approaches to
the force-scan algorithm of Misue et al. Finally, we summarize and conclude our
work in Section 8.
2. Layout Adjustment as Constrained Optimization
All our algorithms for layout adjustment are based on translating the problem into
a constrained quadratic optimization problem of the form
minimize φ subject to C,
where φ is a quadratic expression and C is a (possibly non-linear) collection of
constraints. All are in terms of variables representing the position of each node.
We assume that we are given:
a (possibly directed) graph G = (V, E), where V = {1, ..., n} is the set of nodes
in the graph and E ⊆ V × V is the set of edges in the graph, that is to say,
(u, v) ∈ E if there is an edge from node u to node v;
a node labelling for G, which consists of two vectors w = (w_1, ..., w_n) and
h = (h_1, ..., h_n) which map each node v to the width (w_v) and height (h_v) of
its label, respectively;
and an initial layout for the graph G, which consists of two vectors, x^0 = (x^0_1, ..., x^0_n)
and y^0 = (y^0_1, ..., y^0_n), which map each node v to its initial x or y position
respectively. That is to say, node v is placed at (x^0_v, y^0_v).
Intuitively, as shown in Figure 1, each node has a rectangular bounding box and
the display area has origin at its lower left corner with x axis rightwards and y
upwards.
The variables of the layout adjustment problem are the x and y coordinates of the
nodes, which we shall represent using two vectors x and y. The constraints on the
problem for layout adjustment must ensure that there is no node overlapping in the
resulting solution. These constraints (C_no) can be expressed as: for all u, v ∈ V with u ≠ v,
x_v − x_u ≥ (w_u + w_v)/2   (v to the right of u), or
x_u − x_v ≥ (w_u + w_v)/2   (u to the right of v), or
y_v − y_u ≥ (h_u + h_v)/2   (v above u), or
y_u − y_v ≥ (h_u + h_v)/2   (u above v),
or equivalently
|x_u − x_v| ≥ (w_u + w_v)/2  or  |y_u − y_v| ≥ (h_u + h_v)/2.

Figure 1. Notation

In fact, we usually want the nodes to be separated by some minimum distance d
and not directly abut each other. This is simply handled by modifying the height
and width of each node by adding the distance d, and treating the problem as no
overlap of these new larger nodes.
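To make the no-overlap condition C_no concrete, the following small Python sketch (our own illustration, not part of the original algorithms; the Node fields and the helper name overlaps are hypothetical) checks whether two labelled nodes, optionally padded by the minimum separation d, violate the constraint.

    from dataclasses import dataclass

    @dataclass
    class Node:
        x: float   # centre x coordinate
        y: float   # centre y coordinate
        w: float   # label width
        h: float   # label height

    def overlaps(u, v, d=0.0):
        # C_no is violated iff the (d-grown) labels overlap in both x and y
        too_close_x = abs(u.x - v.x) < (u.w + v.w) / 2 + d
        too_close_y = abs(u.y - v.y) < (u.h + v.h) / 2 + d
        return too_close_x and too_close_y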
As indicated in the introduction, we are primarily concerned with layout adjustment
in the context of dynamic graph layout. This is useful in interactive
applications such as graph display in which sub-graphs are enlarged and shrunk or
node labels are changed. Here the aim is to take an existing graph layout and remove
node overlapping while preserving the user's mental map of the graph.
One heuristic to ensure that the aesthetic criteria remain satisfied and that the
new layout is "similar" to the initial layout is to preserve the orthogonal ordering
of the original layout [17]. The idea is to preserve the relative ordering of the nodes
in the x and y directions. In our context, linear constraints (C_oo) which ensure the
preservation of orthogonal ordering are: for each u, v ∈ V,
if x^0_u < x^0_v then x_u < x_v, and if x^0_u = x^0_v then x_u = x_v,
and similarly for the y-direction. We shall use this heuristic for our first two
approaches, uniform scaling and quadratic programming, but not for the local
search based methods.
The dynamic layout adjustment problem is to find a new layout for G, x and y,
so that the constraints C are satisfied and the position of each node is as close as
possible to its original position. We encode the dynamic layout adjustment problem
as a constrained optimization problem by setting the objective function φ_dyn to be
φ_dyn = Σ_{v ∈ V} ((x_v − x^0_v)² + (y_v − y^0_v)²),
and we wish to
minimize φ_dyn subject to C    (1)
where we can choose the constraints of the problem C to be either C_no or C_no ∧ C_oo.
The new layout is given by an optimal solution to the above problem.
Apart from layout adjustment in a dynamic context, we might also consider layout
adjustment in a static context. In this case the underlying assumption is that the
initial layout has been generated using some sophisticated graph layout algorithm
which captures aesthetic criteria. We do not want to redo this work in the layout
adjustment phase. Hence, static layout adjustment should remove node-label overlapping
but still preserve the (presumably) aesthetically pleasing node placement of
the initial layout. We are also interested in minimizing the overall area of the graph.
Preserving the initial layout is very similar to preserving the user's mental map,
and so we can use the same techniques: constraints to preserve orthogonal ordering
and a term in the objective function to move nodes as little as possible. The
main difference to the dynamic layout adjustment problem is an extra term in the
objective function, φ_stat, which allows the area of the new layout to be minimized.
More precisely, the static layout adjustment problem is to find a new layout for
G, x and y, by solving the constrained optimization problem
minimize φ_dyn + k φ_stat subject to C    (2)
where C is either C_no or C_no ∧ C_oo, k ≥ 0 is a weighting factor, and
φ_stat = Σ_{v ∈ V} ((x_v − c_x)² + (y_v − c_y)²),
where (c_x, c_y) is a point towards which the layout is supposed to be shrunk.
(c_x, c_y) can be the arithmetical average of all nodes' x (y) coordinates, or the median of
all nodes' x (y) coordinates. One can also take the position of the centroid of the graph
or even a predefined point as (c_x, c_y). Minimization will attempt to place the nodes as
close as possible to a predefined position, so reducing the overall area of the new
layout.
Despite its different motivation, the formulation of the static layout adjustment
problem is very similar to that of the dynamic one. Thus, in the following sections, we
will focus on techniques for solving the dynamic case, since the same techniques can be used
with little modification to solve the static case.
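As a concrete reading of objectives (1) and (2), the following sketch (our own; the function names are illustrative) evaluates φ_dyn and φ_stat for candidate coordinate vectors x, y given the initial layout x0, y0 and a shrink point (cx, cy).

    def phi_dyn(x, y, x0, y0):
        # squared displacement from the initial layout
        return sum((xv - xv0) ** 2 + (yv - yv0) ** 2
                   for xv, yv, xv0, yv0 in zip(x, y, x0, y0))

    def phi_stat(x, y, cx, cy):
        # squared distance to the point towards which the layout is shrunk
        return sum((xv - cx) ** 2 + (yv - cy) ** 2 for xv, yv in zip(x, y))

    def static_objective(x, y, x0, y0, cx, cy, k=0.1):
        # k is the weighting factor of problem (2); 0.1 is an arbitrary choice
        return phi_dyn(x, y, x0, y0) + k * phi_stat(x, y, cx, cy)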
3. Using Uniform Scaling
We first consider a very simple approach to layout adjustment in which we uniformly
scale the graph to remove overlapping. We find c_x, m_x, c_y and m_y and move each
node v from (x^0_v, y^0_v) to (m_x x^0_v + c_x, m_y y^0_v + c_y). Since this is a linear transformation,
it preserves the graph's orthogonal ordering (indeed, all of the original graph's
structure), and the scaling factors m_x and m_y are chosen so that node overlapping
is removed. The disadvantage is that uniform scaling may cause nodes to move
unnecessarily.
We first examine how to compute c_x and c_y for given scaling factors m_x and m_y.
It follows from Equation 1 that we must minimize φ_scale, where this is
φ_scale = Σ_{v ∈ V} ((m_x x^0_v + c_x − x^0_v)² + (m_y y^0_v + c_y − y^0_v)²).
This is minimized when
c_x = (1 − m_x) x̄  and  c_y = (1 − m_y) ȳ,
where x̄ and ȳ are the means of the x^0_v's and y^0_v's, respectively. It follows that
φ_scale = n (m_x − 1)² σ_x² + n (m_y − 1)² σ_y²,
where σ_x² and σ_y² are the variances of the x^0_v's and y^0_v's, respectively.
Now we consider how to compute the scaling factors. Consider node i with
dimensions (w_i, h_i) and node j with dimensions (w_j, h_j). Define
mx_ij = (w_i + w_j) / (2 |x^0_i − x^0_j|),
and similarly for my_ij. By construction, mx_ij is
the minimum amount of scaling in the x direction which will remove the overlap
between the two nodes, and similarly my_ij is the minimum amount of scaling in the
y direction to remove this overlap.
Hence we can solve the layout adjustment problem using scaling by finding scale
factors m_x and m_y that solve the constrained optimization problem:
minimize φ_scale subject to, for all 1 ≤ i < j ≤ n with nodes i and j overlapping,
m_x ≥ mx_ij or m_y ≥ my_ij.    (4)
Inspection of Equation 4 reveals that we should choose the minimal scaling factors
which remove overlapping. This means that m_x will be mx_ij
for some overlapping nodes i and j, and similarly for m_y.
This observation leads to the algorithm shown in Figure 2. It computes c_x,
c_y, m_x and m_y, which remove all overlapping (if possible) and which minimize
φ_scale.

layout-scale
  compute the mean x̄ and variance σ_x² of the x^0_v's
  compute the mean ȳ and variance σ_y² of the y^0_v's
  compute the pairs (mx_ij, my_ij) for all overlapping nodes i and j
  sort these pairs lexicographically into the array a[1...m]
  let a[k] have form (mx_k, my_k)
  /* compute possible pairs of scaling factors bmx[k] and bmy[k] */
  for k := 1 to m do
    bmx[k] := mx_k
    bmy[k] := max{ my_j | k < j ≤ m }   (taken as 1 when k = m)
  endfor
  /* now determine which pair has least cost */
  bestcost := +∞
  for k := 1 to m do
    cost := (bmx[k] − 1)² σ_x² + (bmy[k] − 1)² σ_y²
    if (cost < bestcost) then
      bestcost := cost
      best := k
    endif
  endfor
  /* construct the solution */
  if (m = 0) then m_x := 1; m_y := 1
  else m_x := bmx[best]; m_y := bmy[best]
  c_x := (1 − m_x) x̄ ; c_y := (1 − m_y) ȳ

Figure 2. Scaling method for layout adjustment
The only subtlety to notice is that, for each k, the scaling factors bmx[k] and
bmy[k] remove all overlapping, since scaling by bmx[k] in the x-dimension removes the
overlapping corresponding to a[1], ..., a[k] and scaling by bmy[k] in the y-dimension
removes the overlapping corresponding to a[k+1], ..., a[m].
The main part of the algorithm has complexity O(m log m), where m is the number
of overlapping node pairs. Since the number of overlapping pairs is O(|V|²), the overall
complexity is O(|V|² log |V|).
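The scaling method of Figure 2 can be rendered in Python as follows. This is our own sketch, not the authors' implementation; it assumes the Node representation and the overlaps helper from the earlier sketch, and that overlap detection is applied to the initial layout.

    from statistics import mean, pvariance

    def layout_scale(nodes, d=0.0):
        xs = [v.x for v in nodes]
        ys = [v.y for v in nodes]
        varx, vary = pvariance(xs), pvariance(ys)
        # minimum per-pair scaling factors for every overlapping pair
        pairs = []
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if overlaps(u, v, d):
                    mx = (u.w + v.w + 2 * d) / (2 * abs(u.x - v.x)) if u.x != v.x else float('inf')
                    my = (u.h + v.h + 2 * d) / (2 * abs(u.y - v.y)) if u.y != v.y else float('inf')
                    pairs.append((mx, my))
        if not pairs:
            return 1.0, 0.0, 1.0, 0.0            # m_x, c_x, m_y, c_y: nothing to do
        pairs.sort()                             # lexicographic, ascending in mx
        best_cost, best = float('inf'), None
        for k in range(len(pairs)):
            bmx = pairs[k][0]                    # removes overlaps a[1..k] in x
            bmy = max((my for _, my in pairs[k + 1:]), default=1.0)  # the rest in y
            cost = (bmx - 1) ** 2 * varx + (bmy - 1) ** 2 * vary
            if cost < best_cost:
                best_cost, best = cost, (bmx, bmy)
        mx, my = best
        cx, cy = (1 - mx) * mean(xs), (1 - my) * mean(ys)
        return mx, cx, my, cy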
4. Using Quadratic Programming
Quadratic programming is used to find the global optimum of a convex quadratic
objective function where the constraints are a conjunction of linear arithmetic equalities
and inequalities. Quadratic programming has been widely studied in operations
research, and interior point methods provide polynomial time algorithms for
solving such problems.
In Section 2 we saw how to encode the layout adjustment problem as a constrained
optimization problem in which the objective function is a convex quadratic formula.
Unfortunately, the constraints cannot be expressed as conjunctions of linear
arithmetic constraints, since the no-overlap constraints involve disjunction.
To use a quadratic programming based approach, we need to replace these disjunctive
constraints by linear approximations which will guarantee that the no-overlap
constraints hold. Since each individual disjunct is a linear constraint, the
straightforward way to do this is to choose which disjunctive possibility must hold.
In order to construct the linear approximation of the no-overlap constraints C_no
with respect to the x direction (C_no^x), we define the "right horizontal neighbours" of
nodes. We then constrain each node to have no overlap with its right neighbours.
In a sense, we hardwire into the constraints the direction in which the nodes must move
to remove the overlapping.
For any node v, its right horizontal neighbour node set right(v) is the set containing
all nodes u such that, in the initial position: (1) u ≠ v; (2) u is to the right
of v (x^0_u ≥ x^0_v); and (3) u and v could overlap if moved only in the x direction,
i.e., |y^0_u − y^0_v| < (h_u + h_v)/2.
A node u ∈ right(v) is an immediate right horizontal neighbour node of node v if
there does not exist a node u' ∈ right(v) such that u ∈ right(u').
Similarly we can define the upper vertical neighbour node set, upper(v), and
the immediate upper vertical neighbour node, u_v.
It is straightforward to define the constraints in C_no^x. That is, for each node v ∈ V,
we compute its immediate right horizontal neighbours, and for each of these, r_v
say, add the constraint
x_{r_v} − x_v ≥ (w_v + w_{r_v})/2    (5)
into C_no^x. Similarly we define C_no^y.
Generation of the orthogonal ordering constraints C_oo^x and C_oo^y for the x and y
directions, respectively, is conceptually straightforward. For efficiency it is important
to eliminate as many redundant constraints as possible. The precise algorithm
is given in [10].
We treat the layout adjustment problem as two separate optimization problems,
one for the x dimension and one for the y dimension, by breaking the optimization
function into two parts and the constraints into two parts. The constraints in the x
direction, C_x, are given by C_no^x together with the C_oo constraints on the x variables
(C_oo^x). We similarly define C_y.
For the dynamic layout adjustment problem, it follows from the definition of φ_dyn
that the optimization problems are to
minimize φ_x = Σ_{v ∈ V} (x_v − x^0_v)² subject to C_x    (6)
for the x-direction, and
minimize φ_y = Σ_{v ∈ V} (y_v − y^0_v)² subject to C_y    (7)
for the y-direction. The new layout is given by x and y, where x is the solution to
(6) and y is the solution to (7).

quadratic-opt
  compute C_x
  x := minimize φ_x subject to C_x
  x^0 := x
  compute C_y
  y := minimize φ_y subject to C_y

Figure 3. Quadratic programming approach to layout adjustment
The advantage in separating the problem in this way is twofold. First, it improves
efficiency, since it roughly halves the number of constraints considered in
each problem. Second, if we solve for the x-direction first, it allows us to delay
the computation of C_y to take into account the node overlapping which has been
removed by the optimization in the x-direction.
The actual layout adjustment algorithm used is given in Figure 3. First the
problem in the x dimension is solved. Then x^0 is reset to be x, the x positions
discovered by this optimization. Doing this may reduce the number of upper vertical
neighbours and so reduce the size of C_y, and also allow more flexibility in node
placement in the y-direction.
As we previously observed, polynomial time algorithms for solving quadratic programming
problems exist. However, for medium sized problems of several hundred
to one thousand constraints the preferred method of solution is an active set
method. We have used an incremental implementation of the active set method
provided by the C++ QOCA constraint solver [5]. The key idea behind the active
set method is to solve a sequence of constrained optimization problems O_0, ..., O_t.
Each problem minimizes f with respect to a set of equality constraints, A, called
the active set. The active set consists of the original equality constraints plus those
inequality constraints that are "tight", in other words, those inequalities that are
currently required to be satisfied as equalities. The other inequalities are ignored
for the moment.
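As an illustration of the separated x-direction problem (6) with the linearized constraints (5), the following sketch (ours, not the QOCA-based implementation used in the paper) solves it with a general-purpose solver. It assumes SciPy is available and that right_neighbours(v) returns the immediate right horizontal neighbours of v.

    import numpy as np
    from scipy.optimize import minimize

    def solve_x_direction(x0, widths, right_neighbours):
        x0 = np.asarray(x0, dtype=float)

        # objective (6): stay close to the initial x positions
        def phi_x(x):
            return np.sum((x - x0) ** 2)

        # one inequality x[r] - x[v] - (w[v] + w[r]) / 2 >= 0 per pair (v, r)
        cons = []
        for v in range(len(x0)):
            for r in right_neighbours(v):
                cons.append({'type': 'ineq',
                             'fun': (lambda x, v=v, r=r:
                                     x[r] - x[v] - (widths[v] + widths[r]) / 2)})

        res = minimize(phi_x, x0, constraints=cons, method='SLSQP')
        return res.x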
5. Using the EGENET Solver
In this section, we investigate how the EGENET solver can be used to solve the
layout adjustment problem. In order to use quadratic programming we needed
to linearize the no-overlap constraints and implicitly fixed choices about whether
to remove the overlap in the vertical or the horizontal direction. This means that
potentially better solutions to the layout adjustment problem were never considered.
EGENET can handle the disjunctive no-overlap constraints directly and hence is
attractive as a method for solving the constrained optimization problems associated
with layout adjustment. But, because this formulation is non-convex, only a local
optimum can be found.
In the remainder of this section we first review the original EGENET model for
solving discrete constraint satisfaction problems. We then examine how to modify
the original EGENET to handle continuous constraint satisfaction problems
(without optimization). Lastly, we discuss how the modified EGENET can be used
to solve the layout adjustment problem as a continuous constrained optimization
problem. To our knowledge, this work represents the first attempt to investigate
how the EGENET solver can be used to solve continuous constrained optimization
problems.
5.1. The Original EGENET Solver
A constraint satisfaction problem (CSP) [16] is a tuple (U, D, C), where U is a finite
set of variables, D defines a finite set D_z, called the domain of z, for each z ∈ U,
and C is a finite set of constraints restricting the combinations of values that the
variables can take. A solution is an assignment of values from the domains to their
respective variables so that all constraints are satisfied simultaneously. CSPs are
well known to be NP-hard in general.
GENET [27] is an artificial neural network, based on the min-conflict heuristic
(MCH), for solving arbitrary CSPs with binary constraints. The MCH is to assign
to each variable a value causing the minimum number of constraint conflicts, so
as to quickly find the local minima in the search space. Lee et al. [12] extended
GENET to EGENET with a generic representation scheme for handling both binary
and non-binary constraints. EGENET has been successfully applied to solve non-binary
CSPs, such as car-sequencing problems and cryptarithmetic problems, in
an efficient manner [12].
Constraints in EGENET are represented by functions from values of the variables
to non-negative numbers. For example, a binary constraint on z_1 and z_2 can be
represented by a function measuring by how much an assignment violates it;
alternatively, we can represent the same constraint just using a Boolean-valued
function that returns 1 when the constraint is violated and 0 otherwise.
A general CSP can be formulated as a discrete unconstrained optimization
problem as follows:
min_{z ∈ D} f(z) = Σ_{i=1}^{m} λ_i g_i(z),
where D = D_{z_1} × ... × D_{z_n} is the Cartesian product of the (finite) domains of
all the n variables, m is the total number of constraints, g_i(z) denotes the
penalty function for each constraint C_i in the CSP, and λ_i represents the weight
given to the constraint. The penalty g_i(z) equals 0 if the variable assignment z
satisfies C_i, and otherwise returns a positive integer. The weights associated with
constraints are all initially 1, but may be modified by the EGENET procedure. In
this formulation, the goal is to minimize the output of the cost function f(z), whose
value depends on the number of unsatisfied constraints and the weights associated
with these constraints. For each solution z* of the original CSP, f(z*) = 0, since z*
satisfies all the constraints.
EGENET uses a simple local search rule to minimize the cost function f(z), and
a heuristic learning rule to change the weights of constraints, until it finds a solution
z*. Initially, a complete and random variable assignment z^0 is generated. Then
the network executes a convergence procedure as follows. Each variable is asynchronously
updated in each convergence cycle. The update simply finds the value
for each variable which gives the lowest total cost without modifying any other variables.
When there is no change in any value assigned to the variables, the network
is trapped in a local minimum. If the local minimum does not represent a solution,
a heuristic learning rule is used to update the weight λ_i of any violated constraint
C_i in the CSP, so as to help the network escape from the local minimum. The network
convergence procedure iterates until a solution is found or a predetermined
resource limit is exceeded.
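The convergence procedure can be pictured with the following sketch of one convergence cycle for a discrete CSP (our illustration; domains[v] is the finite domain of variable v and cost is the weighted penalty function f).

    import random

    def convergence_cycle(assignment, domains, cost):
        # asynchronously update each variable to a value of minimum total cost
        changed = False
        for v in random.sample(list(domains), len(domains)):
            old = assignment[v]
            best_val, best_cost = old, cost(assignment)
            for d in domains[v]:
                assignment[v] = d
                c = cost(assignment)
                if c < best_cost:
                    best_val, best_cost = d, c
            assignment[v] = best_val
            changed = changed or best_val != old
        # when this returns False, the network is at a local minimum and the
        # learning rule should increase the weight of each violated constraint
        return changed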
5.2. The Modified EGENET Solver
The original EGENET solver only supports constraints over finite domains. Therefore,
to solve layout adjustment problems with the EGENET approach, we have
to consider how to modify the original EGENET solver so as to handle problems
with continuous domains.
To handle such a continuous constrained problem, we have to decide how to
represent the real-number domain of each variable in an EGENET network. The
domain of each variable in a CSP is finite and is usually represented by a finite set
of contiguous integers. For continuous constrained problems this is inappropriate,
since, even if we consider that real variables only take floating point values, there
are too many possibilities. Instead we represent the range of possible values of a
variable z_i simply by a lower and upper bound, [l_i .. u_i].
For problems with no natural lower and upper bounds on variables we need to
generate them. The tighter the bounds used, the more efficient the search will be,
since it examines a smaller area. But giving initial bounds which are too tight may
lead to no solution being found even when one does exist.
Since the original EGENET variable updating function assumes a finite number
of possible values in the domain of each variable, to update a continuous variable
in the modified EGENET network we treat the domain of each continuous
variable, represented as a range [l .. u], as a finite set of values
bounded by l and u. In other words, we sample the domain of a
continuous variable by trying only a finite number of possible values in its range
when updating its value within the EGENET computation. The number of values
we try in each update is called the domain sampling size. The larger the
domain sampling size, the closer to the true local optimum solution we are likely to
get, but the more computation is required at each step. We use a domain sampling
size of 10 throughout our experiments, although this could be changed (even within
a computation).
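For example, with domain sampling size DSZ, the candidate values tried for a variable with current range [l .. u] could be generated as in this short sketch (ours; evenly spaced sampling is one natural choice, not necessarily the one used in the original implementation).

    def sample_domain(l, u, DSZ=10):
        # DSZ evenly spaced candidate values between the bounds, inclusive
        if DSZ == 1:
            return [(l + u) / 2.0]
        return [l + i * (u - l) / (DSZ - 1) for i in range(DSZ)]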
In order to overcome the coarseness of the search when using a small domain
sampling size, without increasing the computational overhead too greatly, whenever
a solution is found we revise the bounds of the variables inwards towards the
solution just found. This has the effect of focusing the search around that solution.
This approach is used in the algorithm below. This extra flexibility
makes our algorithm substantially different from the case where the domain sizes
of the variables remain fixed throughout the computation (as used for discrete
constrained optimization and satisfaction problems).
To handle a continuous constrained optimization problem, we simply add an optimization
component opt(z) to the function to be minimized. The augmented cost function
is
cost(z) = Σ_{i=1}^{m} λ_i g_i(z) + opt(z).
As we shall see in the next section, this is closely related to the Lagrange multiplier
methods.
Pseudo-code for the general optimization algorithm EGENET-opt is given in Figure
4. The algorithm takes the domain sampling size DSZ, the number n of
variables in z and the cost function cost() as input, and returns a tuple consisting
of the best solution found, z_b, and its cost, best. The basic idea is similar to that of
GLS [25]. First, we initialize all the EGENET variables using initialize_vars (usually
this is just random). Then we use the var_order function to produce a permutation
perm which determines the order in which the variables z will be updated. The core
of the method is the updating loop, where each of the sample values for a variable
z_v is tried in turn, and the value which gives minimum cost is retained as the
new value. This is a simple local search on the best value of variable z_v.
Because the cost function includes penalties for violated constraints, this drives
the search towards solutions.
This loop continues until a local minimum is found (where no variable is updated).
When the local minimum represents a solution, EGENET-opt invokes the function
revise_bounds to revise the bounds of the domains of the EGENET variables.
Then it computes the solution cost according to the cost function, and updates the
best solution z_b and best if the cost of the current solution is smaller than best. To
modify the augmented cost function, EGENET-opt invokes penalize_ctr(z), which
is basically the same as the original EGENET heuristic learning mechanism, to
penalize any violated constraint with respect to the current variable assignment.
Similarly, EGENET-opt invokes penalize_opt(z) to modify the form of the optimization
function that occurs in the augmented cost function. The algorithm iterates
until the function stopping_criterion detects that a predefined stopping criterion is
fulfilled and therefore returns true.
EGENET-opt(DSZ, n, cost)
  initialize_vars(z)
  z_b := z
  best := cost(z_b)
  repeat
    repeat
      perm := var_order(n)
      for i := 1 to n do
        v := perm[i]
        mincost := cost(z)
        minval := z_v
        for each of the DSZ sample values d in the range of z_v do
          z_v := d
          if cost(z) < mincost then
            mincost := cost(z)
            minval := z_v
          endif
        endfor
        z_v := minval
      endfor
    until (no update for all v in z)
    if (z represents a solution) then
      revise_bounds(z)
      if (cost(z) < best) then
        z_b := z
        best := cost(z_b)
      endif
    endif
    penalize_ctr(z)
    penalize_opt(z)
  until (stopping_criterion())
  return <z_b, best>

Figure 4. A general EGENET-based optimization algorithm
Clearly, the efficiency of using EGENET-opt to solve a continuous constrained
optimization problem, and the quality (see Note 2) of the best solution found by
EGENET-opt, depend largely on how we define the augmented cost function and how we
modify this cost function through the penalize_ctr and penalize_opt functions.
5.3. Handling Layout Adjustment
The variables of the layout adjustment problem are x and y, the x and y coordinates
of each node. Given the flexible representation scheme for general constraints
in the EGENET model, it is straightforward to define each no-overlap constraint as a
disjunctive constraint in the EGENET network, ensuring that there is
no overlapping between the labels of nodes i and j.
Figure 5 shows how the constraint is represented in the modified EGENET network.
It defines a function g_ij which measures the amount of overlap of nodes i and j in the x and
y directions, returning 0 if there is no overlap. A no-overlap constraint is applied
to each pair (i, j) of nodes with 1 ≤ i < j ≤ n. This demonstrates one advantage
of using the EGENET approach to solve layout adjustment problems: it is, in
general, simple to represent the constraints involved in these problems, or in other related
graph layout problems, in the EGENET network. Indeed, arbitrary additional
constraints can be added to the problem straightforwardly.

Figure 5. The EGENET network for a no-overlap constraint.
We need to determine suitable ranges for the variables of the problem, in order
to make use of the modified EGENET algorithm above. We would like to ensure
that each variable's initial range contains a solution to the problem, if one exists. A
conservative approach is to find a solution using the uniform scaling approach and
then to use the minimal range containing that position and the original value. For
instance, if (x^u_v, y^u_v) is the position computed by the uniform scaling approach for
node v, and the position of node v in the original graph is (x^0_v, y^0_v), then the initial
range for x_v can be set to [x^0_v .. x^u_v] if x^0_v ≤ x^u_v and to [x^u_v .. x^0_v] otherwise, and similarly for
y_v. Clearly this range is guaranteed to include at least one solution. The problem
is that scaling factors tend to be large, so the range is also large.
In practice, by making some assumptions about the density of the nodes we
can start with smaller ranges, and thus reduce the search. Define max_x to be the
maximum overlap in the x direction of any pair of nodes, and define max_y similarly.
We use initial ranges of [x^0_v − max_x .. x^0_v + max_x] for x_v and [y^0_v − max_y .. y^0_v + max_y]
for y_v. This means no solution is guaranteed, but this can only fail for graphs where
there is dense overlapping in the initial layout. In practice this is rare.
For layout adjustment problems, the initial values for the variables x and y returned
by initialize_vars are given by the initial layout x^0 and y^0. One of the reasons for
investigating local search methods is that this initial layout is usually quite close
to the global optimum, and hence local search around the initial layout is likely to
find a good solution.
The variable ordering strategy var_order used simply updates the variables in one
dimension (x) in a random order before or after updating the variables in the other
dimension (y). This is effective since the relationships between the x and y variables
are weak (only through the disjunctive constraints). By evaluating each dimension in
turn we get faster convergence to local minima. Other strategies are, of course,
possible.
The revise_bounds function is defined as follows. When a solution is found at x,
then for each x_i: if x_i > x^0_i, we reset the upper bound to the current value x_i and
the lower bound to x^0_i; if x_i < x^0_i, we reset the lower bound to the current
value x_i and the upper bound to x^0_i; otherwise the bounds are unchanged.
Similarly for y. This encodes the strategy that, since we have found a solution on
one side of the starting point, we will not look on the other side, and we will not
look further away, since looking farther away will tend to reduce the optimality of the
solution.
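A direct rendering of revise_bounds (our sketch, with the lower and upper bounds stored per variable in parallel lists) is:

    def revise_bounds(x, x0, lower, upper):
        # shrink each variable's range towards the solution just found
        for i in range(len(x)):
            if x[i] > x0[i]:
                lower[i], upper[i] = x0[i], x[i]
            elif x[i] < x0[i]:
                lower[i], upper[i] = x[i], x0[i]
            # if x[i] == x0[i] the bounds are left unchanged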
For the graph layout problem, stopping_criterion returns true when a predetermined
number of iterations has been tried, or when the cost of the current solution
is worse than or equal to that of the previous solution. The total resource limit is initially
set to 1000. Each time a better solution is found, the resource limit is reset to
double the amount of resource used to find the previous solution, if this is less than
100. Otherwise, the resource limit is reset to the minimum of 1000 and 120% of the
resource used when finding the previous solution.
All that remains is to determine the penalize_ctr and penalize_opt functions. The
relative weights of the objective function and the constraint penalties are important.
For example, for the graph layout adjustment problem, if the augmented cost
function is biased towards the penalty for constraint violation, that is, node overlapping,
then EGENET-opt may take a longer time, or even fail, to return a good
solution (see Note 3). On the other hand, if the augmented cost function is biased towards
the solution cost, then EGENET-opt may fail to find a solution satisfying all the
no-overlap constraints.
Recall the objective function φ_dyn for layout adjustment. As the quantity
(x_v − x^0_v)² + (y_v − y^0_v)² (usually of the order of 100 or 1000) is always much larger than
the penalty value of the constraints (which are Boolean) for constraint violations,
we need to normalize this quantity so as to avoid bias towards the solution cost.
If the range of x_v is [l_v .. u_v], then the maximum value of the term (x_v − x^0_v)² is
the larger of (l_v − x^0_v)² and (u_v − x^0_v)². Denote this by omax_v. The normalized optimization
function for x_v is simply
norm_x(x_v) = (x_v − x^0_v)² / omax_v.
Clearly, the value returned by norm_x(x_v) is a real number in the range [0 .. 1].
We can similarly define a normalized optimization function norm_y for the variables y.
Accordingly, we define the augmented cost function cost(x, y) as the sum of the
penalties for constraint violations and the sum of the normalized optimization functions,
as follows:
cost(x, y) = Σ_{i=1}^{m} λ_i g_i(x, y) + Σ_{v ∈ V} (norm_x(x_v) + norm_y(y_v)),
where m denotes the total number of constraints in the problem.
When a local minimum is found, the penalty function penalize_opt changes the
normalized optimization functions to take into account the new smaller ranges for
some variables. The penalize_ctr function simply increases the weight λ_i of a violated
constraint by 1 (as in the original EGENET model).
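The normalized augmented cost can be sketched as follows (our illustration; penalty is assumed to return the weighted constraint term Σ λ_i g_i, and the per-variable ranges are supplied as lists).

    def norm_term(val, val0, lo, hi):
        # normalized squared displacement, in [0, 1] over the range [lo, hi]
        omax = max((lo - val0) ** 2, (hi - val0) ** 2)
        return (val - val0) ** 2 / omax if omax > 0 else 0.0

    def augmented_cost(x, y, x0, y0, xlo, xhi, ylo, yhi, penalty):
        c = penalty(x, y)   # sum of lambda_i * g_i(x, y) over all constraints
        for v in range(len(x)):
            c += norm_term(x[v], x0[v], xlo[v], xhi[v])
            c += norm_term(y[v], y0[v], ylo[v], yhi[v])
        return c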
Note that in the EGENET approach we have ignored the orthogonal ordering
constraints and simply concentrated on the overlap constraints. In practice we have
found that minimizing φ_dyn preserves the structure of the original graph, and so
tends to preserve orthogonality, even though there are no explicit constraints
to do so.
6. Using A Pseudo-Lagrangian Method
The EGENET local search method described in the previous section is a modification
of a discrete constraint satisfaction algorithm to solve continuous constrained
optimization problems. In the end, it looks very similar to a Lagrange multiplier
method. This inspired us to solve the problem directly using such an approach.
6.1. Lagrangian Multiplier Methods
Lagrange multiplier methods are a general approach to continuous constrained optimization
that can tackle non-linear objective functions with non-linear constraints.
A general continuous equality-constrained problem is formulated as follows:
minimize f(z) subject to g(z) = 0,    (8)
where f(z) is the objective function and g(z) is a vector of functions representing
the constraint penalties.
The Lagrangian function associated with this problem is a weighted sum of the
objective function and the constraints. It is defined as
L(z, λ) = f(z) + λ^T g(z),
where λ is a vector of Lagrange multipliers. The Lagrangian function is related to
the local extrema of problem (8) by the following theorem (see e.g. [13]).
Theorem 1. Let z* be a local extremum of f(z) subject to g(z) = 0. Assume that z* is a
regular point (see Note 4). Then there exists a vector λ* such that
∇_z f(z*) + (λ*)^T ∇_z g(z*) = 0.
Based on the above theorem there are a number of methods for solving constrained
optimization problems. The most widely used is the first-order method, represented
as an iterative process:
z^{k+1} = z^k − α_k ∇_z L(z^k, λ^k),    (10)
λ^{k+1} = λ^k + α_k g(z^k),    (11)
where α_k is a step-size parameter. Intuitively, the equations represent counteracting
forces to achieve a good solution to the optimization problem. When a constraint
is violated, (11) increases the weight of the constraint, forcing the search towards a
solution. In contrast, (10) performs descent in the optimization direction once all
of the Lagrange multipliers are fixed.

Figure 6. Overlap calculation.
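A literal transcription of the first-order iteration (10)-(11) reads as follows (our sketch; grad_f and grad_g denote ∇f and the m-by-n Jacobian of g, and all quantities are NumPy arrays).

    import numpy as np

    def first_order_step(z, lam, grad_f, g, grad_g, alpha):
        # (10): descend on the Lagrangian L(z, lam) = f(z) + lam . g(z)
        z_new = z - alpha * (grad_f(z) + grad_g(z).T @ lam)
        # (11): increase the multiplier of each violated constraint
        lam_new = lam + alpha * g(z)
        return z_new, lam_new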
6.2. Layout Adjustment with Lagrange Multipliers
The Lagrangian approach to layout adjustment performs an iterative process like
the one defined above. Rather than a synchronous update of all variables at once,
each node v is treated in turn, and its two coordinate variables x_v and y_v are
updated together. This allows the non-overlap constraints to be handled in a more
meaningful way.
When two nodes i and j overlap, as illustrated in Figure 6, the overlap
of j on i in effect creates a force on i in the direction shown. If node i moves
either a distance dx_ij in the x direction or dy_ij in the y direction, the overlap will
disappear. We choose the minimum magnitude of dx_ij and dy_ij as the resulting
constraint violation. In effect, we treat the constraint function g_ij as follows:
g_ij = 0 if i and j do not overlap, and g_ij = min(|dx_ij|, |dy_ij|) otherwise.
Given this definition, ∇_{x_i} g_ij is ±1 (according to the direction of the required move)
when |dx_ij| ≤ |dy_ij| and 0 otherwise, and
similarly for ∇_{y_i} g_ij.
The two constraint functions g_ij and g_ji are symmetric and represent a single
underlying overlap constraint. Hence, for each pair of nodes i and j we have a single
Lagrange multiplier λ_ij representing the current weight of the constraint. The basic
local search is then simply a Lagrangian optimization using first-order stepping.
Since the optimization only finds a local minimum, one optimization may not find a
very good solution. After a local minimum is reached, the pseudo-Lagrangian method
reduces the step size by half, and also moves the nodes closer to their positions in
the initial layout, since this will further decrease the objective function, albeit at the
risk of introducing overlapping. This lets the search potentially find better solutions.
Eventually, after some number of such reoptimizations, the method finishes,
returning the best solution found. For the experiments the initial step size was 0.125
and the factor limit 255. The function initialize_multiplier simply initializes all the
Lagrange multipliers to 1. The resource limit was never exceeded.
The algorithm in Figure 7 does not take into account the orthogonal ordering
constraints. These could be added using the standard Lagrangian approach of
adding new constraint functions for the equality and inequality orthogonal ordering
relations. Unfortunately, this simple addition can lead to divergent behaviour.
One simple approach to handling the equality constraints of the orthogonal ordering
is instead to simply force their compliance. Let S be a set of coordinates which
must all be equal. For each such set we compute the average
avg = (Σ_{x ∈ S} x) / |S| and set each value within S to this average value.
This approach, while seemingly ad hoc, has quite a principled
justification. If we replaced the set of variables S by a single variable (in effect
using the equation constraints as substitutions), then the change in this variable
would be determined by the sum of the changes of the variables in the set S.
The average is simply a function of the sum of the changes of the variables in S.
However, this approach does not handle the inequality constraints of the orthogonal
ordering.
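The averaging step for a set S of coordinates that must be equal can be written as (our sketch; coords is the list of coordinate values being updated and S a collection of indices into it):

    def force_equal(coords, S):
        # project the coordinates in S onto their common average value
        avg = sum(coords[i] for i in S) / len(S)
        for i in S:
            coords[i] = avg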
As with the EGENET approach, in our empirical evaluation of the Lagrangian
approach we have not included constraints to preserve the original orthogonal ordering,
since again, in practice, we have found that minimizing φ_dyn preserves
the structure of the original graph, and so tends to preserve orthogonality.
PLM()
  initialize_multiplier()
  best := +∞
  stepsize := initial step size
  factor := 1
  repeat
    for each node i do
      for each node j do
        if (i ≠ j) then
          update λ_ij using (11)
        endif
      endfor
      update (x_i, y_i) using (10)
    endfor
    if (g(x, y) = 0) then               /* the current layout has no overlap */
      if φ_dyn(x, y) < best then
        best := φ_dyn(x, y)
        record (x, y) as the best layout
      else
        stepsize := stepsize / 2
        factor := 2 * factor
        move the nodes closer to their initial positions
        initialize_multiplier()
      endif
    endif
  until (factor > limit or resource limit exceeded)
  return the best layout found

Figure 7. Pseudo-Lagrangian method for layout adjustment
7. Empirical Evaluation
In this section we compare the performance of the Scaling Algorithm (SCALE)
presented in Section 3, the Force Scan Algorithm (FSA) of Misue et al. [17], the
quadratic solver approach (QUAD) presented in Section 4, the modified EGENET
solver described in Section 5 and the Pseudo-Lagrangian method (PLM) discussed
in Section 6 on a set of nine dynamic graph layout adjustment problems.
QUAD is implemented in Borland C++ for Windows version 4.5. The other
solvers are implemented in C and compiled with the GCC compiler version 2.7.3 on
Linux. All tests were performed on a Pentium PC running at 155 MHz.
Table 1. CPU time (in seconds) taken by SCALE, FSA, QUAD, EGENET and PLM
for layout adjustment of the example problems.
Graph | # nodes | SCALE | FSA | QUAD | EGENET | PLM

Table 2. Value of φ_dyn in the adjusted layout using SCALE, FSA, QUAD,
EGENET and PLM for layout adjustment on the example problems.
Graph | # nodes | SCALE | FSA  | QUAD  | EGENET | PLM
9     | 17      | 38880 | 127008 | 77760 | 5535 | 3688
A quantitative comparison of these different methods on the sample problems is
provided in Tables 1 and 2. In each table, the first column gives the identifying
number of the graph, while the second column gives the number of nodes in the
sample graph. Table 1 details the CPU time in seconds taken to find the adjusted
layout for each method, while Table 2 details the value of φ_dyn for the adjusted
layout given by each method.
For the local search methods EGENET and PLM, the averages of CPU time
and cost over 10 successful runs are reported in each case. For all the cases we have
tested, both EGENET and PLM successfully find a (sub-)optimal final layout
without node overlapping. It should be noted that, for both EGENET and PLM,
the solvers halt when the current solution found is worse than the previous
one. For EGENET, the solver will also halt after the, possibly reset, resource limit
is exceeded.
Broadly speaking, we find that the ranking of methods with respect to the CPU
time taken to find the adjusted layout, from fastest to slowest, is SCALE, FSA,
QUAD, PLM and EGENET. SCALE and FSA are considerably faster than the
other methods, QUAD is somewhat faster than PLM, and the modified EGENET
solver takes much longer than the other solvers. This is probably because the
modified EGENET solver can only consider a finite number of points in the domain
of each variable for each update, and only slowly revises the domains after each learning step.
The values of φ_dyn reported in Table 2 provide a simple numerical measure of the
quality of the adjusted layout. However, it is also important to look at the actual
aesthetic quality of the adjusted layout. For this reason we now look at each of the
example graphs and show the resulting layout adjustment with each method. Note
that we have used an asterisk (*) to indicate the method giving the (subjectively)
best layout for each example in Table 2.
Figure 8. Initial layout and layout adjustment for Graph 1: (a) Graph 1, general; (b) Graph 1 with box nodes; (c) layout with SCALE; (d) layout with FSA; (e) layout with QUAD; (f) layout with EGENET; (g) layout with PLM.
Figures 8(a) and 8(b) respectively show the initial layout of Graph 1 as an idealized
graph with circles as nodes and as a labelled graph. Figures 8(c), 8(d), 8(e), 8(f)
and 8(g) give the resulting layout adjustment using SCALE, FSA, QUAD, the modified
EGENET and PLM, respectively. All methods provide reasonable layout adjustment,
although we note that EGENET introduces an edge/node label overlap.
PLM gives the best adjustment.
Note that, as in all of our figures, we have scaled each graph to have a maximum
height or width of one inch. Thus the adjusted layouts may not have the same
scale. Smaller (usually better) layouts may be identified by their relatively larger
node labels.
Figure 9. Initial and resulting layouts for Graph 2: (a) Graph 2; (b) layout with SCALE; (c) layout with FSA; (d) layout with QUAD; (e) layout with EGENET; (f) layout with PLM.
Surprisingly, for Graph 2, as shown in Figure 9, the simplest method, SCALE, finds
the best layout, while the other methods only find a local minimum. All methods
give reasonable layout adjustment, with QUAD giving the worst layout.
Figure 10 shows the initial and resulting layouts for Graph 3, which has been
chosen as an example of the kind of explanation diagram widely used in biology
or engineering textbooks. Arguably, QUAD gives the best adjusted layout since
it best preserves the original graph's structure. SCALE increases the size by too
much, while EGENET, and to a lesser extent PLM, change the orthogonal ordering
of the graph and remove its symmetry.
Graph 4, shown in Figure 11(a), is an example of a rooted tree. Such labelled
graphs are commonly used to display data structures or organization structures in
many real-life applications. All methods give reasonable layouts. The worst is by
SCALE, since it unnecessarily increases the size, while the layout found by QUAD
is slightly better than those found by PLM and EGENET.
Graph 5 is another example of rooted tree layout. This time PLM gives the best
layout, although it has changed the orthogonal ordering, closely followed by FSA,
which preserves the orthogonal ordering. Both QUAD and EGENET introduce an
edge/label crossing.
Figure 10. Initial and resulting layouts for Graph 3: (a) Graph 3; (b) layout with SCALE; (c) layout with FSA; (d) layout with QUAD; (e) layout with EGENET; (f) layout with PLM.

Figure 11. Graph 4, tree layout adjustment (1): (a) Graph 4; (b) layout with SCALE; (c) layout with FSA; (d) layout with QUAD; (e) layout with EGENET; (f) layout with PLM.

Graph 6 was carefully designed to show that layout adjustment may introduce
edge crossings even though the layout adjustment method preserves the graph's
orthogonal ordering. Both FSA and QUAD produce the worst layouts since they
introduce edge crossings. SCALE does not introduce any edge crossing in the final
layout, since the initial layout does not contain any, but the resulting
layout is, as usual, unnecessarily wide. The modified EGENET and PLM methods
produce very similar (and good) layouts, with that produced by EGENET slightly
better than that produced by PLM.
As shown in Figure 14(a), Graph 7 is a rather pathological graph with no edges
but with a lot of node overlapping occurring at different horizontal levels. SCALE
clearly gives the worst layout adjustment, while the modified EGENET gives the
best layout, which is neatly packed and fairly close to the initial layout. On the other
hand, PLM gives a stack-like layout which looks quite different from the original
graph.

Figure 12. Graph 5, tree layout adjustment (2): (a) Graph 5; (b) layout with SCALE; (c) layout with FSA; (d) layout with QUAD; (e) layout with EGENET; (f) layout with PLM.
Graph 8, shown in Figure 15(a), is a simplified version of Graph 7 with some nodes
removed; thus, there is less node overlapping in Graph 8. All of the algorithms
produce a result closer to the original graph, but in this case PLM gives the most
aesthetically pleasing layout.
Graph 9 is an X-shaped graph with symmetry about both the x- and y-axes.
All layouts are reasonable. The layouts by SCALE, FSA and QUAD retain the
symmetry, while those produced by the modified EGENET and PLM do not. QUAD
produces the best layout.
As we can see, no method produces uniformly better layout adjustment; even
SCALE produces the best layout for one example. However, in general, SCALE and
FSA produce the worst layouts, and PLM, closely followed by QUAD, produces
the best. However, EGENET and PLM can lose the original structure of
the graph, and to preserve this structure we would need to add extra constraints
to the modified EGENET and PLM solvers. Generally, QUAD and FSA preserve
the structure, in part because they preserve the orthogonal ordering, but they may still
introduce edge overlapping not found in the original graph. SCALE is guaranteed
to preserve all of the original structure since it performs a simple uniform scaling.
Figure 13. Initial and resulting layouts for Graph 6: (a) Graph 6 without node labels; (b) Graph 6 with node labels; (c) layout with SCALE; (d) layout with FSA; (e) layout with QUAD; (f) layout with EGENET; (g) layout with PLM.
7.1. Resource-bounded Layout Adjustment
Although the PLM solver produces the best layouts, Table 1 suggests that it is
substantially slower than the SCALE and FSA solvers. A natural question to ask
is: given a (small) fixed amount of time, which method will give the best layout?
This question makes sense because the PLM solver employs local search techniques
and at any point in time has a current best solution. Thus we can stop the PLM
solver after any time interval, look at the quality of the solution, and compare
it to that of the SCALE and FSA solvers. It does not make sense to perform
the same experiment with QUAD, since there is no concept of "the current best
solution".
The graph in Figure 17 shows the cost of the best solution found so far (as a
multiple of the best found eventually) versus time for Graphs 1, 7 and 9 during
the execution of the PLM algorithm. The first non-overlapping solution is found
for each graph within 0.02 seconds. Except for Graph 9, the value of φ_dyn for this
solution is smaller than that of the solution eventually found by SCALE, FSA or
QUAD (for Graph 9 the first solution has cost 8108). For each of the graphs PLM
finds a solution whose value of φ_dyn is within 20% of the eventual best in less than
half the time required to find the best. For these examples we could safely stop
PLM after 0.15 seconds and obtain a solution within 20% of the best found and, in
particular, better than any solution found by the other algorithms.

Figure 14. Initial and resulting layouts for Graph 7: (a) Graph 7; (b) layout with SCALE; (c) layout with FSA; (d) layout with QUAD; (e) layout with EGENET; (f) layout with PLM.

Figure 15. Initial and resulting layouts for Graph 8: (a) Graph 8; (b) Graph 8 by SCALE; (c) Graph 8 by FSA; (d) Graph 8 by QUAD; (e) Graph 8 by EGENET; (f) Graph 8 by PLM.

Figure 16. Initial and resulting layouts for Graph 9: (a) Graph 9; (b) layout with SCALE; (c) layout with FSA; (d) layout with QUAD; (e) layout with EGENET; (f) layout with PLM.
8. Conclusion
We have studied the problem of layout adjustment for graphs, in which we wish to
remove node overlapping while preserving the graph's original structure and hence
the user's mental map of the graph.
We have given four algorithms to solve this problem, all of which rely on viewing
it as a constrained optimization problem. Empirical evaluation of our algorithms
shows that they are reasonably fast and give good layouts, better than
those of the comparable algorithm, the FSA of Misue et al. Generally speaking,
the approach based on Lagrangian methods gives the best layout in a reasonable
time; however, it may not preserve the orthogonal ordering of the original graph.
The quadratic programming approach also produces good layouts in a reasonable
time and does preserve the graph's orthogonal ordering. Uniform scaling gives the
fastest and simplest approach to layout adjustment. It may lead to unnecessary
enlargement of the graph, but it is guaranteed to preserve all of the original structure
of the graph.
Our results are not only interesting for layout adjustment of graphs: they also
suggest techniques for laying out non-overlapping windows and labels in maps.
Figure 17. Cost of the best solution found so far against time for Graphs 1, 7, and 9.
9. Acknowledgement
We would like to thank Peter Eades for his comments on our work, and Yi Xiao
for the quadratic solver.
Notes
1. Note that unless the number of pixels is used, real numbers are usually used to denote the
positions and sizes of the nodes in a graph layout, since this allows greater flexibility.
2. Even though the global optimality of the resulting solution cannot be guaranteed by the
EGENET approach, as a local search method, the experimental results of the related GLS
approach reported in [25] show that, for a set of real-life military frequency assignment
problems, GLS always found better solutions than those found by conventional search methods.
3. Here, we mean a solution with cost close enough to the globally optimal cost of the optimization
problem.
4. A regular point of the constraints g is one where the gradients ∇g_1, ..., ∇g_m are linearly independent.
--R
Boltzmann machines for traveling salesman problems.
A discrete stochastic neural network algorithm for constraint satisfaction problems.
Algorithms for drawing graphs: an annotated bibliography.
Solving linear arithmetic constraints for user interface applications.
GENET: A connectionist architecture for solving constraint satisfaction problems by iterative improvement.
Solving small and large scale constraint satisfaction problems using a heuristic-based microgenetic algorithm
Online animated graph drawing using a modified spring algorithm.
Preserving the mental map of a diagram.
Removing node overlapping using constrained optimization.
Extending GENET for Non-Binary CSP's
Towards a more efficient stochastic constraint solver.
Cluster busting in anchored graph drawing.
Consistency in networks of relations.
Layout adjustment and the mental map.
Experimental and theoretical results in interactive orthogonal graph drawing.
Edge: an extendible graph
Automatic graph drawing and readability of diagrams.
Foundations of Constraint Satisfaction.
The tunneling algorithm for partial csps and combinatorial optimization problems.
Partial constraint satisfaction problems and guided local search.
Methods of Optimization.
Solving satisfaction problems using neural-networks
--TR
--CTR
Huang , Wei Lai, Force-transfer: a new approach to removing overlapping nodes in graph layout, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.349-358, February 01, 2003, Adelaide, Australia
Wanchun Li , Peter Eades , Nikola Nikolov, Using spring algorithms to remove node overlapping, proceedings of the 2005 Asia-Pacific symposium on Information visualisation, p.131-140, January 01, 2005, Sydney, Australia
Huang , Wei Lai , A. S. M. Sajeev , Junbin Gao, A new algorithm for removing node overlapping in graph visualization, Information Sciences: an International Journal, v.177 n.14, p.2821-2844, July, 2007
Huang , Peter Eades , Wei Lai, A framework of filtering, clustering and dynamic layout graphs for visualization, Proceedings of the Twenty-eighth Australasian conference on Computer Science, p.87-96, January 01, 2005, Newcastle, Australia | constrained optimization;graph layout;disjunctive constraints |
634941 | Capillary instability in models for three-phase flow. | Standard models for immiscible three-phase flow in porous media exhibit unusual behavior associated with loss of strict hyperbolicity. Anomalies were at one time thought to be confined to the region of nonhyperbolicity, where the purely convective form of the model is ill-posed. However, recent abstract results have revealed that diffusion terms, which are usually neglected, can have a significant effect. The delicate interplay between convection and diffusion determines a larger region of diffusive linear instability. For artificial and numerical diffusion, these two regions usually coincide, but in general they do not.Accordingly, in this paper, we investigate models of immiscible three-phase flow that account for the physical diffusive effects caused by capillary pressure differences among the phases. Our results indicate that, indeed, the locus of instability is enlarged by the effects of capillarity, which therefore entails complicated behavior even in the region of strict hyperbolicity. More precisely, we demonstrate the following results. (1) For general immiscible three-phase flow models, if there is stability near the boundary of the saturation triangle, then there exists a Dumortier-Roussarie-Sotomayor (DRS) bifurcation point within the region of strict hyperbolicity. Such a point lies on the boundary of the diffusive linear instability region. Moreover, as we have shown in previous works, existence of a DRS point (satisfying certain nondegeracy conditions) implies nonuniqueness of Riemann solutions, with corresponding nontrivial asymptotic dynamics at the diffusive level and ill-posedness for the purely convective form of the equations. (2) Models employing the interpolation formula of Stone (1970) to define the relative permeabilities can be linearly unstable near a corner of the saturation triangle. We illustrate this instability with an example in which the two-phase permeabilities are quadratic.Results (1) and (2) are obtained as consequences of more general theory concerning Majda-Pego stability and existence of DRS points, developed for any two-component system and applied to three-phase flow. These results establish the need for properly modelling capillary diffusion terms, for they have a significant influence on the well-posedness of the initial-value problem. They also suggest that generic immiscible three-phase flow models, such as those employing Stone permeabilities, are inadequate for describing three-phase flow. | Introduction
The standard model for three-phase flow in petroleum reservoirs is based on the two-component
system of partial differential equations
∂U/∂t + ∂F(U)/∂x = ∂/∂x ( B(U) ∂U/∂x ),    (1.1)
where U = (u_1, u_2) and F(U) = (f_1(U), f_2(U)). Here u_1 and u_2 denote the saturations of two of the
three phases (viz., water, gas, and oil), f_1 and f_2 are their respective fractional flow functions,
and B encodes the effects of capillary pressure. The variables u_1 and u_2 take values on the
saturation triangle { (u_1, u_2) : u_1 ≥ 0, u_2 ≥ 0, u_1 + u_2 ≤ 1 }. A physical derivation of this
model is given in Sec. 4.1.
Date: November, 1999.
This work was supported in part by: FAP-DF under Grant 0821 193 431/95; CAPES under Grant
BEX0012/97-1; FEMAT under Grant 990003; CNPq under Grant CNPq/NSF 910029/95-4; CNPq under
Grant 520725/95-6; CNPq under Grant 300204/83-3; MCT under Grant PCI 650009/97-5; FINEP under
Grant 77970315-00; CNPq under Grant 301411/95-6; NSF under Grant DMS-9732876; DOE under Grant
DE-FG02-90ER25084; ONR under Grant N00014-94-1-0456; and NSF under Grants DMS-9107990 and
DMS-9706842.
It is standard practice to neglect the capillary terms and consider Eq. (1.1) from the point of view of hyperbolic conservation laws. A well-known difficulty with this approach is that, generically, three-phase flow models give rise to elliptic regions, i.e., regions where the eigenvalues of the Jacobian F′(U) are not real [5, 37, 38, 41, 20, 21, 30]. In these regions, the initial-value problems for the equations with B(U) ≡ 0 are ill-posed, and numerical solutions of them display rapid oscillations with large amplitudes on finer and finer scales as the mesh spacing tends to zero. However, at least at one time, there seems to have been a general feeling that the conservation laws should yield a satisfactory hyperbolic theory outside of the elliptic region. See, e.g., Refs. [18, 19, 34, 14, 13], in which the structure of Riemann solutions is studied for certain Stone models in the absence of capillarity effects.
The role of elliptic regions in three-phase flow models has been investigated by several authors. Bell, Shubin, and Trangenstein studied numerically the Riemann solution of a version of Stone's model [5]. The model they considered has a long, thin elliptic region, which appears to be unrelated to the elliptic region that occurs in the present paper [25]. Keyfitz [24, 25, 26] studied rigorously the wave structure for a model having a strip as an elliptic region. She explained many of the wave features observed near the boundary of the elliptic region of Ref. [5]. Riemann solutions for other systems of conservation laws having infinite strips as elliptic regions have been studied extensively by Slemrod [39] and Shearer [36]. In all these works, simplified or numerical diffusion terms were utilized to determine admissibility conditions for shock waves.
More recently, results in Refs. [27, 8, 2, 3, 17] have emphasized a more primary cause of anomalous behavior: linearized instability of the complete diffusive system, Eq. (1.1), rather than ill-posedness of its purely convective form. These two concepts coincide in the case of artificial diffusion B ≡ I, or more generally when convective and diffusive effects commute; however, as shown by Majda and Pego, the role of dissipation structure is quite important when other than such special diffusion terms are considered. In Ref. [29], they developed a useful sufficient condition in terms of B(U) and F′(U) for the linearized instability of Eq. (1.1). The corresponding "Majda-Pego instability region" contains, but is typically larger than, the elliptic region. Nonuniqueness [1, 3] or nonexistence [6, 7, 33] of solutions for Riemann problems, the latter manifested as highly oscillatory waves (which are measure-valued solutions) [17, 16, 15], can occur in the Majda-Pego instability region, even in zones of strict hyperbolicity.
Accordingly, we study here a model with a physically correct diffusion matrix, taking full account of capillarity. The resulting diffusion matrix is not a multiple of the identity matrix and gives rise to the effects mentioned above. Notice that numerical diffusion typically commutes with convective terms (indeed, for standard schemes the diffusion matrix B(U) is a polynomial in F′(U)); hence these effects (having to do with noncommutativity) do not arise or are not captured by standard hyperbolic difference schemes. Of course, numerical simulations which fully resolve the parabolic equation (1.1) do capture the effects discussed here.

More precisely, we study a model with (a) permeabilities obtained using Stone's interpolation formula from quadratic two-phase permeabilities and (b) Leverett's capillary pressure functions. Stone's model is widely used in petroleum engineering. The purely convective form of this model generically features an open elliptic region in the interior of the saturation triangle along with three isolated points of nonstrict hyperbolicity (umbilic points) at the corners of Δ. For artificial or numerical diffusion, the linear instability region consists of the elliptic region and the three corners. We find that, when physical (viz., Leverett) capillarity is taken into account, each of these regions of linear instability expands into the region of strict hyperbolicity.
In particular, we identify model parameters for which there are wedges at the corners inside (respectively, outside) of which the model is linearly unstable (resp., stable). This phenomenon is rather unexpected, since the instability occurs near the boundary, where modeling should be relatively accurate and the system might be expected to be well-behaved. On the other hand, if a particular Stone's model is well-behaved near the boundary, or the corner instability regions are sufficiently small, we show that a Dumortier-Roussarie-Sotomayor (DRS) bifurcation point must exist in the interior of Δ. Such a point lies on the boundary of the Majda-Pego instability region (see the definition in Sec. 2) and generically lies in the strictly hyperbolic region (where the eigenvalues of F′(U) are real and distinct). As shown in Ref. [3], a nondegenerate DRS point gives rise, in its vicinity, to multiple solutions of Riemann problems for the purely convective form of the governing equations. Both linear instability and nonuniqueness are outside the range of "good" behavior for hyperbolic conservation laws.
The paper divides into two parts. In the first part, consisting of Secs. 2 and 3, we investigate existence of DRS points in general two-component models following a degree-theoretic approach. Our main result is to show that generically there exists a DRS point within any simple closed curve Γ on which: (i) all points are strictly hyperbolic and Majda-Pego stable; and (ii) the eigendirections of F′(U) rotate by an odd multiple of π as Γ is traversed. This greatly generalizes an existence theorem proved in Ref. [3] for special, quadratic flux models by direct calculation. The proof relies on a detailed decomposition of the Majda-Pego instability region.

In the second part, consisting of Secs. 4-6, we consider the three-phase flow model described above. We determine the Majda-Pego stability of edge and corner points and calculate the rotation of the eigendirections of F′(U) around a contour near ∂Δ, thereby verifying the hypotheses for existence of DRS points. At the same time, we give a detailed discussion of capillarity and its relationship to well-posedness of Eq. (1.1), which we hope is of general use to the reader.
2. Linear Stability and DRS Bifurcation
Consider a general two-component system of conservation laws of the form in Eq. (1.1), where F is C³ and B is C² in a particular open subset U of state space; further, assume that B is strictly parabolic in the sense that, for each U ∈ U, the eigenvalues of B(U) have strictly positive real parts. In this section, we explore a connection between linear stability of a constant solution and DRS bifurcation via the stability condition of Majda and Pego [29].
2.1. Majda-Pego stability condition. We begin by recalling, and slightly extending, the results of Ref. [3] on bifurcation of admissible shock waves. An admissible shock wave is a traveling wave solution U(x, t) = Û(x − σt) of system (1.1). If σ denotes its speed and U(x, t) → U± as x → ±∞, then such a traveling wave corresponds to an orbit of the system of two ordinary differential equations

   B(U) U̇ = F(U) − F(U−) − σ (U − U−),   (2.1)

which we regard as a dynamical system parameterized by (U−, σ).

In Ref. [3], the bifurcation of nonclassical (overcompressive and transitional) shock waves from a constant solution was investigated. It was shown that this bifurcation corresponds to a codimension-three bifurcation, studied under certain nondegeneracy conditions by Dumortier, Roussarie and Sotomayor [12, 11], that occurs at a DRS point, viz., a point (U*, λ*) ∈ U × R satisfying:
(D1) λ* is an eigenvalue of F′(U*), with associated right and left eigenvectors r* and ℓ*;
(D2) ℓ* F″(U*)(r*, r*) = 0;
(D̃3) tr{ B(U*)^{-1} [ F′(U*) − λ* I ] } = 0.
Associated with DRS bifurcation are the phenomena of nonuniqueness of Riemann solutions
and nontrivial asymptotic behavior, as described in the introduction.
Remark. We have substituted the condition (D̃3) for the original condition (D3): ℓ* B(U*) r* = 0. One consequence of our analysis below is that, given condition (D1), conditions (D3) and (D̃3) are equivalent except if U* is an umbilic point, i.e., F′(U*) is a multiple of the identity. Condition (D̃3) will prove to be more convenient in calculations.
One of the main observations of Ref. [3] is a link between DRS bifurcation and linear instability of the constant solution U(x, t) ≡ U* of system (1.1). This link, however, was explored only when U* is a point of strict hyperbolicity. Our first task, therefore, is to establish a relationship between DRS bifurcation and linear instability when strict hyperbolicity fails at U*.
By Fourier analysis, L² linearized stability of the constant solution U(x, t) ≡ U* of system (1.1) requires the following condition:

   for all real k, each eigenvalue of −ikF′(U*) − k²B(U*) has nonpositive real part.   (2.2)

Standard matrix perturbation theory shows that the eigenvalues λ_s(k) and λ_f(k) of the matrix −ikF′(U*) − k²B(U*) have the following expansion around k = 0:

   λ_j(k) = −i a_j k + β_j k² + O(k³),   j = s, f,   (2.3)

where a_s and a_f denote the eigenvalues of F′(U*). Moreover, if the eigenvalues a_j are distinct, and ℓ_j, r_j denote left and right eigenvectors associated to a_j, normalized so that ℓ_j r_j = 1, then

   β_j = −ℓ_j B(U*) r_j,   j = s, f.   (2.4)

Therefore necessary conditions for linear stability are hyperbolicity at U* and, in case U* is a point of strict hyperbolicity,

   ℓ_j B(U*) r_j ≥ 0   for j = s, f.   (2.5)

Definition 2.1. We will refer to the inequality

   ℓ_j B(U) r_j > 0   (2.6)

as the Majda-Pego stability condition for family j, and we let the Majda-Pego stable region MP be the set of strictly hyperbolic points U for which the Majda-Pego stability condition holds for both families.
Evidently, condition (D3), in the context of condition (D1), represents neutral failure of the Majda-Pego stability condition (2.6). As mentioned above, condition (D3) can be replaced by condition (D̃3) within the region of strict hyperbolicity [3]. Conditions (D1) and (D̃3) imply that the trace and determinant of B(U*)^{-1}[F′(U*) − λ* I] vanish. Following Ref. [8], we define the Bogdanov-Takens locus (or BT locus) as

   BT := { U ∈ U : conditions (D1) and (D̃3) hold at U for some λ* }.

Bogdanov-Takens bifurcation in the structure of weak traveling waves occurs at points of BT satisfying certain nondegeneracy conditions [25, 8, 9]. Similarly, we define the coincidence locus, where the characteristic speeds coincide, to be

   E := { U ∈ U : the eigenvalues of F′(U) coincide }.
It was observed in Refs. [8, 3] that if U* is a point of strict hyperbolicity and U* ∈ ∂MP, then U* ∈ BT. We will show in this section that the assumption of strict hyperbolicity is superfluous; in other words, the boundary of the Majda-Pego stable region consists entirely of points satisfying conditions (D1) and (D̃3). To prove this statement, let us introduce the following real-valued functions defined for U ∈ U:

   λ_E(U) := (1/2) tr F′(U),    λ_BT(U) := tr[B(U)^{-1} F′(U)] / tr[B(U)^{-1}],
   g_E(U) := det[ F′(U) − λ_E(U) I ],    g_BT(U) := det[ F′(U) − λ_BT(U) I ].

The significance of λ_E and λ_BT is that λ_E solves the equation tr[F′(U) − λI] = 0 and λ_BT solves the equation tr[B(U)^{-1}(F′(U) − λI)] = 0.
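The quantities λ_E, λ_BT, g_E, and g_BT are easy to evaluate numerically; the sketch below (ours) follows the characterizations just stated, taking g_E and g_BT as the determinants det[F′(U) − λ_E(U) I] and det[F′(U) − λ_BT(U) I], a reading consistent with Lemma 2.2 and Prop. 2.3 below.

```python
# Sketch (ours) of lambda_E, lambda_BT, g_E, g_BT for a 2x2 system.
import numpy as np

def stability_functions(dF, B):
    I = np.eye(2)
    lam_E = 0.5 * np.trace(dF)                # solves tr[dF - lam I] = 0
    Binv = np.linalg.inv(B)
    lam_BT = np.trace(Binv @ dF) / np.trace(Binv)   # solves tr[B^-1 (dF - lam I)] = 0
    g_E = np.linalg.det(dF - lam_E * I)       # < 0 iff strictly hyperbolic
    g_BT = np.linalg.det(dF - lam_BT * I)     # < 0 iff Majda-Pego stable (Prop. 2.5)
    return lam_E, lam_BT, g_E, g_BT
```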
Lemma 2.2. Let U ∈ U. If a_s and a_f denote the eigenvalues of F′(U), then

   g_E(U) = (λ_E(U) − a_s)(λ_E(U) − a_f),   (2.15)
   g_BT(U) = (λ_BT(U) − a_s)(λ_BT(U) − a_f).   (2.16)

Moreover,

   g_E(U) = −(a_f − a_s)²/4   (2.17)

is −1/4 times the discriminant of F′(U), and if F′(U) is diagonalizable (in particular, if a_s ≠ a_f), with ℓ_j and r_j denoting left and right eigenvectors of F′(U) associated to a_j, normalized so that ℓ_j r_j = 1, then

   g_BT(U) = −(a_f − a_s)² (ℓ_s B(U) r_s)(ℓ_f B(U) r_f) / (tr B(U))².   (2.18)

Proof. Equations (2.15) and (2.16) follow immediately from the definitions of g_E and g_BT. Substituting λ_E(U) = (a_s + a_f)/2 into Eq. (2.15), we obtain Eq. (2.17). Similarly, as B(U) is a 2×2 matrix, B(U)^{-1} = [tr B(U) I − B(U)]/det B(U), which yields Eq. (2.18) when substituted in Eq. (2.16).

Equations (2.9), (2.15), and (2.16) imply the following result.

Proposition 2.3. The functions g_E and g_BT are related by

   g_BT(U) = g_E(U) + [λ_BT(U) − λ_E(U)]².   (2.21)

Another consequence of Lemma 2.2 concerns E ∩ BT.
Proposition 2.4. If U ∈ E is not an umbilic point, then the following are equivalent:
(a) U ∈ BT;
(b) g_BT(U) = 0;
(c) the unique eigendirection of F′(U) is an eigendirection of B(U).
Proof. Statements (a) and (b) are equivalent by Eq. (2.21).
As the matrix F 0 (U) E (U)I is nilpotent; and because U is not an umbilic
point, this matrix is nonzero. Let r 6= 0 span its kernel. Since the square of this matrix is
zero, r also spans its range. In other words, F 0 (U)
If statement (a) holds, then with denoting the common value of E (U) and BT (U ),
nilpotent and nonzero, and r belongs to its kernel. In fact,
since the square of this matrix is zero, r also spans its range. Let s be a vector such that
cr for some c, by nilpotency, and therefore
statement (c) is true.
Conversely, if statement (c) holds, trfB(U) 1 [F 0 (U) Hence
which implies statement (b).
Proposition 2.5. We have that MP = { U : g_BT(U) < 0 }. In particular, ∂MP ⊆ { U : g_BT(U) = 0 }.

Proof. By the strict parabolicity of B, 0 < tr B(U). Therefore Eq. (2.18) shows that, given strict hyperbolicity at U, g_BT(U) < 0 if and only if U ∈ MP. By Prop. 2.3, strict hyperbolicity (g_E(U) < 0) follows from the condition g_BT(U) < 0.

This result was proved for strictly hyperbolic points in Prop. 2.1 in Ref. [3]. Therefore the new content concerns points of E. In this regard, it is useful to determine the behavior of eigenvectors of F′(U) as U approaches a point U* ∈ E from within the strictly hyperbolic region.
Lemma 2.6. Suppose that U 2 U has an open neighborhood V 0 containing no umbilic
points. Also assume that g E (U ) 0, and that DgE (U
has an open neighborhood V V 0 such that the following statements hold.
(a) The sets VH := f U of hyperbolic
and strictly hyperbolic points in V are connected.
(b) In the hyperbolic region VH , the eigenvalues of F 0 are continuous functions a s a f .
(c) Corresponding to the eigenvalues a j , for there exist: continuous right eigenvector
elds r j on VH , normalized so that jr 1, such that det(r s ; r f ) has a xed sign
continuous left eigenvector elds ' j in
normalized so that ' j r
(d) Suppose that U 2 E \V. If U ! U from within V SH , then ' s (U)=j' s
Proof. By the hypothesis concerning g E and the Implicit Function Theorem, U has an open
neighborhood statement (a) holds. We can also assume that if U
Statement (b) holds because F is C 1 . For each U 2 VH and the
matrix F 0 (U) a j (U)I has rank one (because there are no umbilic points in V 0 ); let R j (U)
denote its kernel. Then by statement (b), R j is a continuous line eld on VH . Moreover,
Let r
vectors lying in R j (U ); if U 2 E , take r
f . By choosing
smaller, if necessary, we can assume that the angle between R j (U) and R j (U ) is less
than =2 for all U 2 VH . Then for each U 2 VH be the unit vector
that has angle less than =2 relative to r
. Note that det(r s ; r f ) does not vanish in V SH ,
and hence has xed sign by connectivity. Therefore we can dene ' s (U) and ' f (U) to be
the rows of the inverse of the matrix (r s (U); r f (U)). The vectors r j and ' j so dened satisfy
statement (c). Statement (d) is a consequence of the normalization ' j r and the fact
that r
Remark. In the context of this lemma, consider a point U 2 E \ V, and suppose that
2 BT . By Prop. 2.4, the unique eigendirection of F 0 (U ) is not an eigendirection of
. By
Prop. 2.3, g BT (U ) > 0, so that g BT > 0 throughout an open neighborhood W V of
U . Hence, by formulae (2.17) and (2.18), ' s Br s and ' f Br f have xed and opposite signs
throughout W SH := W \V SH . In fact, by statement (d) of Lemma 2.6, these two quantities
have the signs of s (r ) ? B(U )r , respectively. Thus we see that, if s (r ) ? B(U )r is
positive (respectively, negative), W SH consists of points at which the Majda-Pego stability
condition is violated for the slow (resp., fast) family. Moreover, with
. Referring to Eq. (2.4), we see that this result re
ects
the transition from the second-order (in instability in the strictly hyperbolic region to the
rst-order instability in the strictly elliptic region.
2.2. Decomposition of the Majda-Pego unstable region. Let us define Ω := U \ MP to be the Majda-Pego unstable region. Also let Ω_e, Ω_s, and Ω_f denote the elliptic region, slow-family instability region, and fast-family instability region, respectively. Here, for a strictly hyperbolic point U, ℓ_j and r_j denote any left and right eigenvectors associated to the eigenvalue a_j of F′(U), normalized so that ℓ_j r_j = 1.
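As an illustration (ours), the decomposition can be evaluated pointwise; the sketch below takes Ω_s and Ω_f to be the sets of strictly hyperbolic points at which ℓ_s B r_s ≤ 0 and ℓ_f B r_f ≤ 0, respectively, an assumption consistent with the proof of Prop. 2.7 below.

```python
# Sketch (ours): classify a state into the elliptic region, the slow- or
# fast-family instability regions, or the Majda-Pego stable region.
import numpy as np

def classify(dF, B, tol=1e-12):
    eigvals, R = np.linalg.eig(dF)
    if np.abs(eigvals.imag).max() > tol:
        return "elliptic"
    order = np.argsort(eigvals.real)
    R = R[:, order].real
    L = np.linalg.inv(R)                 # rows: left eigenvectors with l_j r_j = 1
    slow = L[0] @ B @ R[:, 0]
    fast = L[1] @ B @ R[:, 1]
    if slow <= 0:
        return "slow-family unstable"
    if fast <= 0:
        return "fast-family unstable"
    return "Majda-Pego stable"
```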
Proposition 2.7. The sets Ω_e, Ω_s, and Ω_f form a partition of Ω. In particular, the elliptic region is contained in the Majda-Pego unstable region. Moreover, the boundaries ∂Ω, ∂Ω_s, and ∂Ω_f are contained in E ∪ BT.
Proof.
and
by strict
parabolicity,
s
2.3,
If, on the other hand, g E (U) < 0,
then, by Eq. (2.18), g BT (U) 0 if and only if one of l s B(U)r s and ' f B(U)r f is nonpositive.
Thus
s
f
. The statement about boundaries is evident from the denitions of
the sets involved.
We now make the following nondegeneracy hypotheses on the model parameters of system (1.1) for the domain U of interest:
(H1) at each point U ∈ U such that g_E(U) = 0, we have Dg_E(U) ≠ 0;
(H2) at each point U ∈ U such that g_BT(U) = 0, we have Dg_BT(U) ≠ 0.
By the Implicit Function Theorem, the loci E and BT are smooth curves. Also, E is the same as the elliptic boundary ∂Ω_e, and BT is the same as the Majda-Pego boundary ∂MP.

Remark. A simple calculation shows that Dg_E(U) = 0 whenever U is an umbilic point, i.e., F′(U) is a multiple of the identity matrix. Therefore hypothesis (H1) excludes umbilic points. In particular, we can restate one of the conclusions of Prop. 2.4 as: if g_E(U) = 0, then g_BT(U) = 0 if and only if the unique eigendirection of F′(U) is an eigendirection of B(U).

Let U be such that g_E(U) = 0 and g_BT(U) = 0. A simple calculation also shows that D(g_E − g_BT)(U) = 0. In other words, E and BT are tangent where they meet. In view of this result, we are led to adopt a higher-order nondegeneracy assumption:
(H3) at each point U ∈ U such that g_E(U) = g_BT(U) = 0, we have D²(g_E − g_BT)(U)(V, V) ≠ 0 for all vectors V ≠ 0 such that Dg_E(U) V = 0.
Under this hypothesis, the points where E and BT intersect are isolated.
Proposition 2.8. Assume hypotheses (H1)-(H3). Away from the discrete set E ∩ BT, the boundaries ∂Ω_s and ∂Ω_f are smooth curves coinciding locally with either E or BT. Moreover, ∂Ω_s ∩ ∂Ω_f ⊆ E ∩ BT.
Proof. Consider a point U 2
@
s that does not lie in E \ BT . (The proof if U 2
@
f is
analogous.) Then, by Prop. 2.3 and Eq. (2.18), either (1) g E (U
Case (1). By assumption (H1), there exists an open neighborhood V of U such that
g. As g BT (U ) > 0, we can assume
that (ii) g BT > 0 throughout V. By Lemma 2.6, we can further assume that properties (a){
(c) hold as well. Properties (ii) and (a){(c), together with Eqs. (2.17) and (2.18), imply that
Therefore
so that
(@
Thus
@
s coincides with E in V.
Notice also that U cannot lie in the
closure
because V is a neighborhood of U that
does not
intersect
denition
comprises points of V SH for which ' f B r f 0,
Case (2). By assumption (H2), there exists an open neighborhood V of U such that
g. As
we can assume that (ii) g E < 0 throughout V. By Lemma 2.6, we can further assume
that properties (a){(c) hold as well. Because B is strictly parabolic and ' s Br s 0 at
U , we must have that ' f Br f > 0 at U ; therefore we can assume that (iii) ' f Br f > 0
throughout V. Properties (ii), (iii), and (a){(c), together with Eqs. (2.17) and (2.18), imply
that
so that
(@
@
s coincides with
BT in V.
Finally, consider a point U 2
@
s \
@
f . If g E (U ) were negative, then both '
s B(U )r
s
and '
f B(U )r
f would vanish, which is impossible because of the strict parabolicity of B.
Hence so that g BT (U ) 0 by Prop. 2.3. But it was just demonstrated that
cannot belong to
both
s
and
f . Therefore
Remark. As we traverse
@
the family for which the Majda-Pego condition (2.6)
fails can switch only at points common
to
s
and
f , which necessarily belong to the elliptic
boundary
@
To see how this switch can occur, let U 2 E \ BT and consider
traversing instead the elliptic boundary (which is tangent to BT ).
s be as in Lemma 2.6. By taking V smaller if necessary, we may
assume that E \ V and BT \ V are smooth curves meeting only at U , where they have
rst-order, but not second-order, contact. Let U 2 E \ V. According to the discussion
in the remark following Lemma 2.6, if U
belongs to
@
s
(respectively,
@
according as s (r ) ? B(U )r is positive (resp., negative). Therefore
the family for which the Majda-Pego stability condition (2.6) fails switches if and only if
changes sign as U passes U . This change could be assured, for example, by
another nondegeneracy hypothesis.
These considerations show that, from the original perspective of traversing BT , the family
for which stability fails typically switches at elliptic points of BT . An example of this
phenomenon is illustrated in Fig. 3.1 below.
3. Existence of a DRS Point
We are now ready to establish a general result concerning existence of DRS points. We seek a point at which conditions (D1)-(D̃3) are satisfied. By Prop. 2.5, it is sufficient to identify a point U* on the boundary of the Majda-Pego stable region MP for which condition (D2) holds, i.e.,

   ℓ* F″(U*)(r*, r*) = 0,   (3.1)

where r* and ℓ* are the eigenvectors of F′(U*) determined by conditions (D1) and (D3). Our strategy is to show that the directions of ℓ* and r* rotate by an odd multiple of π as U* traverses ∂MP. Since the left-hand side of Eq. (3.1) is homogeneous of odd degree in ℓ* and r*, we can conclude that it vanishes at some point on ∂MP.
3.1. Degree. In the hyperbolic region { U : g_E(U) ≤ 0 }, the eigenvalues of F′(U) are continuous functions a_s(U) ≤ a_f(U). Under hypothesis (H1), the matrix F′(U) − a_j(U) I has rank one throughout the hyperbolic region, as there are no umbilic points (see the remark after condition (H2)). Therefore the left and right kernels of F′(U) − a_j(U) I are continuous left and right line fields L_j and R_j. Notice that for U ∈ E, L_s = L_f and R_s = R_f.
Let Γ be a continuous, oriented closed curve in R², and let R be a continuous line field on Γ. Then the degree of R with respect to Γ, denoted by deg(R, Γ), is 1/π times the angle through which R(U) rotates as U traverses once around Γ (see Ref. [22, Chapter III, Sec. 1]).
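To complement the definition, here is a small numerical sketch (ours) that estimates the degree of a line field along a sampled closed curve; it assumes the sampling is fine enough that the line rotates by less than π/2 between consecutive samples.

```python
# Sketch (ours): 1/pi times the total angle through which a line field rotates
# along a closed sampled curve. line_field(u) returns any nonzero spanning
# vector of R(u); its sign is irrelevant because a line has an angle mod pi.
import numpy as np

def line_field_degree(curve_points, line_field):
    total, prev = 0.0, None
    pts = list(curve_points) + [curve_points[0]]   # close the curve
    for u in pts:
        v = line_field(np.asarray(u, dtype=float))
        theta = np.arctan2(v[1], v[0]) % np.pi
        if prev is not None:
            d = theta - prev
            if d > np.pi / 2:        # wrap the increment to (-pi/2, pi/2]
                d -= np.pi
            elif d <= -np.pi / 2:
                d += np.pi
            total += d
        prev = theta
    return total / np.pi
```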
For our purposes, we shall require a slightly more general notion of degree, defined for boundaries of suitably nice sets. Toward this end, define deg(R, Γ) as above even if Γ is not closed. (Of course, this number is not necessarily an integer.) If Γ is decomposed as a succession of continuous, oriented curves Γ_1, ..., Γ_n, then deg(R, Γ) = Σ_i deg(R, Γ_i) (see Ref. [10, Chap. XVI, Sec. 1]).
Denote by C the class of bounded, nonempty sets W ⊆ R² such that ∂W consists of a finite union of piecewise smooth curves, intersecting at a discrete set of points, that can be oriented unambiguously by the outward-normal convention (i.e., at each point of ∂W that is not one of the intersection points, W is locally diffeomorphic to a half-plane). In particular, by Props. 2.7 and 2.8, MP, Ω_s, Ω_f, and Ω_s ∪ Ω_f belong to C.

If W ∈ C and R is a continuous line field on ∂W, we can decompose ∂W as a succession of continuous, oriented curves Γ_1, ..., Γ_n and define

   deg(R, ∂W) := Σ_i deg(R, Γ_i).

This number is independent of the decomposition. Moreover, it is an integer because any decomposition of ∂W into continuous, oriented curves can be grouped into a union of closed curves by forming the boundaries of the connected components of the set obtained from W by removing the finitely many intersection points.
The following useful properties of the degree hold.
(P1) The degree deg(R, Γ) is invariant under homotopies of both Γ and R.
(P2) Let W_1 and W_2 be disjoint members of C, and suppose that W := W_1 ∪ W_2 belongs to C. If R is a continuous line field on ∂W_1, ∂W_2 and ∂W, then deg(R, ∂W) = deg(R, ∂W_1) + deg(R, ∂W_2).
(P3) If R is a continuous line field on all of W, then deg(R, ∂W) = 0.
For property (P1), see, e.g., Ref. [42, Sec. 34]. Property (P2) holds because ∂W_1 and ∂W_2 consist of ∂W together with a finite number of pairs of coinciding curves with opposite orientations. Property (P3) is proved for a simply connected region W by shrinking ∂W to a point and invoking homotopy invariance; it is proved in general by subdividing the compact set W into simply connected regions.
3.2. Main result. We can now state the main result of this section.

Theorem 3.1. Let hypotheses (H1)-(H3) hold on a bounded region U, and suppose that Γ is a (nontrivial) piecewise smooth simple closed curve lying within MP. If deg(R_f, Γ) is odd, then there exists a DRS point strictly inside Γ.
We will prove this theorem using a series of lemmas. Let denote the closed bounded
region with boundary . Then belongs to the class C. Without loss of generality, we may
assume that the Majda-Pego unstable
region
contained in the interior of
. Notice
that
is nonempty; otherwise would be contained in the strictly hyperbolic
region and R k would be a continuous line eld on , so that deg(R
would vanish by property (P3).
Lemma 3.2. Under the hypotheses of Theorem 3.1, deg(R
and the degrees deg(R s ; @
, and deg(L f ; @
all equal this common
value.
Proof. Since, for each U in the strictly hyperbolic region, R s (U) and R f (U) are linearly
independent directions in the plane, R s and R f have equal degrees around . Likewise, by
orthogonality of left and right eigenvectors, deg(L
For the line eld R j is continuous throughout
because this set is contained
in the (non-strictly) hyperbolic region. Further,
belongs to class C by the assumed
smoothness of
@
and its separation from = @. Therefore
Similarly deg(L
@On
s
f , dene the new line elds
e
R s
in
in
and e
in
in
The line eld e
R is well-dened and continuous
on
because, by Prop. 2.8, R s and
on
s
Similarly, e
L is well-dened and continuous. Moreover, for
@
e
Lemma 3.3. Under the hypotheses of Theorem 3.1, deg( e
and deg( e
Proof. By properties (P2) and (P3) of the degree,
@
@
@
@
R. But by Eq. (3.6), deg(R;
@
independent of the choice R
R f , or e
R. Therefore Lemma 3.2 implies that deg( e
Similarly, deg( e
@ equals this value.
Proof of Theorem 3.1. By hypothesis (H1), conditions (D1) and ( e
are satised on
@
R and ' 2 e
L. It remains only to show that for some U 2 @
On the hyperbolic region, in particular on BT , the continuous line elds e
R and e
locally continuous, nonzero vector elds r and ' ; indeed, each line eld locally generates
precisely two branches. Choose a connected component of
@
around which e
R and e
L have
odd degree, and choose a particular branch for r and ' at some initial point on this
component. Following this branch while traversing once around this component, we nd by
Lemma 3.3 that the quantity ' F 00 (U )(r ; r ) must change sign, since both e
L and e
R rotate
by an odd multiple of . Thus it vanishes somewhere on @
Remark. Arguments with a similar topological flavor have been used to establish the existence of points at which strict hyperbolicity fails [37, 38]. In [20, 21, 41, 30], failure of strict hyperbolicity was established with different arguments.

Remark. The DRS point generically lies in the region of strict hyperbolicity. If it were to occur on E, Prop. 2.4 indicates that r* would have to be an eigendirection of B(U*).
Example 3.4. For generic two-component models with quadratic flux F and constant matrix B, the elliptic boundary and the BT locus intersect in precisely two points, unless B is a multiple of the identity matrix, in which case they coincide. Indeed, one can show that the eigenvector r undergoes a rotation of angle π as U traverses the elliptic boundary (it suffices to compute this rotation for homogeneous quadratic models). Thus the direction of r coincides precisely once with each of the eigendirections of B. If B has more than two eigendirections, then B is a multiple of the identity matrix, in which case E and BT coincide. Otherwise B has only two eigendirections, and there are precisely two points in E ∩ BT. One may verify that the family in which Majda-Pego stability fails switches at these two points, illustrating the conclusions of the remark at the end of Sec. 2.2.
Figure 3.1 shows MP and the decomposition of Ω into Ω_s and Ω_f for a model such that Ω is compact. The points in E ∩ BT are indicated by two small open circles, the three filled circles represent DRS points, and the rectangles mark line field singularities [31, 23]. The curves through the rectangles constitute the inflection locus I, along which genuine nonlinearity fails in the family, f or s, indicated by whether the curve is solid or dashed.
4. Three-Phase Flow
In this section, we introduce a class of systems of the type given by Eq. (1.1), which are used in Petroleum Engineering to model three-phase flow in porous media. We discuss the properties of the model that have a role in determining features such as Majda-Pego stability regions and DRS points.

One of the main consequences of the analysis of important classes of three-phase flow models in Sec. 5, under the generic assumptions that such models satisfy (H1)-(H3), follows from Theorem 3.1. It is the following.

Theorem 4.1. Either the Majda-Pego instability region Ω accumulates at the boundary of the saturation triangle ∂Δ or else there exists a DRS point interior to Δ.
Figure 3.1. The Majda-Pego stable region MP, the decomposition of the Majda-Pego unstable region Ω into Ω_s and Ω_f, and the coincidence, Bogdanov-Takens, and inflection loci E, BT, and I for a quadratic model.
4.1. The basic equations. We consider one-dimensional, horizontal flow of three immiscible fluid phases in a porous medium [32]. For concreteness, we consider a fluid composed of gas, oil and water, mixed at macroscopic level. The differences among these phases lie in some flow properties. We assume that the whole pore space is occupied by the fluid and that there are no sources or sinks. Compressibility, thermal and gravitational effects are considered to be negligible.
The equations expressing conservation of mass of water, gas, and oil are

   ∂(φ ρ_i s_i)/∂t + ∂(ρ_i v_i)/∂x = 0,   i = w, g, o,   (4.1)

respectively, where φ denotes the porosity of the porous medium. For the phase i, s_i denotes the saturation, ρ_i the density, and v_i the seepage velocity (the product of the saturation by the particle velocity of the phase i). Since the fluid occupies the whole pore space, the saturations satisfy

   s_w + s_g + s_o = 1.   (4.2)

As a consequence, any pair of saturations in the saturation triangle Δ may be chosen to describe the state of the fluid.
The theory of multiphase flow in porous media is based on the following form of Darcy's law of force [32, 35, 4]:

   v_i = −K λ_i ∂p_i/∂x,   i = w, g, o,   (4.3)

where K denotes the absolute permeability of the porous medium, λ_i ≥ 0 is the mobility of phase i, and p_i is the pressure of phase i. The mobility is usually expressed as λ_i = k_i/μ_i, the ratio of the relative permeability k_i and the viscosity μ_i of phase i.
The porosity φ and absolute permeability K are associated to the rock; we take them to be constant. Neglecting thermal effects and compressibility, ρ_i and μ_i are constant too, and we can rewrite Eqs. (4.1) without ρ_i. Each relative permeability k_i depends on the saturations. Experimentally, k_i increases when s_i increases, and the relative permeabilities never vanish simultaneously.
Let us denote the difference between the pressures in phases i and j (i ≠ j) by p_ij := p_i − p_j. This pressure difference, called the capillary pressure, is measured experimentally as a function of the saturations. Define the total mobility λ and the fractional flow functions f_i by

   λ := λ_w + λ_g + λ_o,    f_i := λ_i / λ.   (4.5)

Of course, f_w + f_g + f_o = 1. Introducing the total seepage velocity v := v_w + v_g + v_o and using algebraic manipulation, we can write that

   f_i v = −K λ_i Σ_j f_j ∂p_j/∂x.

Adding and subtracting v_i in the last equation and noting that ∂p_i/∂x − Σ_j f_j ∂p_j/∂x = Σ_j f_j ∂p_ij/∂x, we see that

   v_i = f_i v − K λ_i Σ_j f_j ∂p_ij/∂x.   (4.7)
Therefore, by Eqs. (4.1) and (4.7), the equations governing the flow are

   φ ∂s_w/∂t + ∂(f_w v)/∂x = ∂/∂x { K λ_w [ f_g ∂p_wg/∂x + f_o ∂p_wo/∂x ] },
   φ ∂s_g/∂t + ∂(f_g v)/∂x = ∂/∂x { K λ_g [ f_w ∂p_gw/∂x + f_o ∂p_go/∂x ] },   (4.8)
   φ ∂s_o/∂t + ∂(f_o v)/∂x = ∂/∂x { K λ_o [ f_w ∂p_ow/∂x + f_g ∂p_og/∂x ] }.
Summing Eqs. (4.1), we find ∂v/∂x = 0, so that v is a function of t alone. (This simplification occurs only for flow in one spatial dimension. In general, we obtain an elliptic equation for the pressure, with coefficients depending on the saturations.) Assuming that v never vanishes, it is possible to change the variable t so that v is constant. We do not consider the case v(t) ≡ 0 because in this case the system (4.8) does not contain the terms related to transport. As v is nonzero, we can rescale x and t so that v, φ, and K are removed from system (4.8). For simplicity of notation, we drop the tildes.
Of course, any one of the equations in the three-component system (4.8) is redundant and the system can be reduced to a two-component system. As a result of the redundancy in the system (4.8), one of its characteristic speeds is 0. The two other characteristic speeds are the same as for the subsystem of two equations; see Ref. [30].
We will find it convenient later to be flexible in our choice of the two saturations entering this two-component system. Hence we will use u_1 and u_2 to denote two of the saturations in the reduced system, where the other saturation u_3 is replaced by 1 − u_1 − u_2. Also it is useful to replace p_12 by p_13 − p_23. Thus we obtain the two-component system for three-phase flow

   ∂u_1/∂t + ∂f_1/∂x = ∂/∂x { λ_1 [ (1 − f_1) ∂p_13/∂x − f_2 ∂p_23/∂x ] },
   ∂u_2/∂t + ∂f_2/∂x = ∂/∂x { λ_2 [ −f_1 ∂p_13/∂x + (1 − f_2) ∂p_23/∂x ] }.   (4.9)

This system can be written in compact form as Eq. (1.1) with U = (u_1, u_2)^T and F(U) = (f_1(U), f_2(U))^T, and with B(U) = Q(U) P′(U), where

   Q(U) = [ λ_1(1 − f_1)   −λ_1 f_2
            −λ_2 f_1        λ_2(1 − f_2) ]    and    P(U) = (p_13(U), p_23(U))^T.

The quantities U, F, and B(U) are called the state vector, flux function, and diffusion matrix, respectively. We refer to Q(U) and P′(U) as the balance matrix and capillary pressure Jacobian.
There are two major assumptions that we make concerning the mobilities. The first is that

   λ_i = 0 whenever s_i = 0.   (4.12)

(More generally, λ_i vanishes when s_i is below the irreducible saturation for phase i, which, for simplicity, we take to be zero.) Because of this assumption, each edge of the saturation triangle is invariant for system (4.8). Indeed, system (4.9) reduces to the scalar conservation law

   ∂u_1/∂t + ∂f_1(u_1, 0)/∂x = ∂/∂x [ λ_1 (1 − f_1) ∂p_13(u_1, 0)/∂x ]   (4.13)

on the edge u_2 = 0, and a similar reduction occurs on the other edges. In other words, on the edges the three-phase flow equations reduce to the Buckley-Leverett equation for two-phase flow.
There are two characteristic speeds for system (4.9), one of which reduces to the Buckley-Leverett characteristic speed on the edges. Our second assumption is that the other characteristic speed is positive and strictly smaller than the Buckley-Leverett characteristic speed near the edges. For example, on the edge u_2 = 0 the Jacobian F′ is triangular; thus we see that the characteristic speeds there are f_{1,1} and f_{2,2}, that f_{1,1} is the Buckley-Leverett speed, and that the second assumption can be written as 0 < f_{2,2} < f_{1,1} on the edge u_2 = 0.
4.2. Assumptions on the fractional
ow functions. In this section we present assumptions
used in the rest of the paper.
For brevity we use a subscript \; j" to denote the partial derivative with respect to u j for
2. The Jacobian matrix of F is therefore
According to Eq. (4.5),
2: (4.17)
We shall make the following assumptions concerning the fractional
ow functions: f 1 , f 2 ,
and f 3 are continuously dierentiable functions on the closed saturation triangle such that
on the open edge
f 1f 2;1 has a continuous extension to the open edge where
in the interior of near
are bounded away from zero in the interior of near
Each of the assumptions above actually represents three different assumptions, corresponding to a choice of two saturations u_1 and u_2 among s_w, s_g, and s_o.

The first part of assumption (4.18) corresponds to the requirement that the permeabilities be nonnegative. By the second part of this assumption, f_2 = 0 on the edge u_2 = 0, so that the Jacobian F′ is upper triangular and its eigenvalues are f_{1,1} and f_{2,2}. Hence assumption (4.19) states that the eigenvalues are distinct on each open edge, and that the fast eigenvector is parallel to the edge. Thus the three-phase flow reduces to two-phase flow along each edge, with the fast eigenvalue corresponding to the Buckley-Leverett speed (see Eq. (4.13)). In particular, f_{1,1} = f_{2,2} at the corner, reflecting the immiscibility of two-phase flow. Assumption (4.21) implies that the eigenvalues are distinct in the interior of Δ near each corner. (Necessarily, the eigenvalues coincide at each corner. Thus, the model is strictly hyperbolic near the boundary except at the corners.) This assumption also implies that the saturations u_1 and u_2 vary in opposite directions across slow rarefaction waves near each corner. Assumptions (4.20) and (4.22) simplify the conditions for stability at the boundary of Δ. We will show that these assumptions hold for certain models employed in petroleum reservoir engineering in Sec. 6.
4.3. Assumptions on the capillary pressures. In principle, the capillary pressures p_ow and p_og are experimentally measured functions of all saturations. (The third capillary pressure is obtained as p_wg = p_wo + p_og.) In general, the capillary pressure Jacobian (4.11) is

   P′(U) = [ p_{13,1}  p_{13,2}
             p_{23,1}  p_{23,2} ].   (4.23)

We make the following assumptions:

   P′(U) is positive definite and diagonally dominant inside Δ;   (4.24)
   p_{13,1} > 0 on the closed edge u_2 = 0.   (4.25)

Assumption (4.25), which guarantees that the two-phase flow equation (4.13) for the edge u_2 = 0 has a positive diffusion coefficient, actually represents three different assumptions, corresponding to a choice of two saturations u_1 and u_2 among s_w, s_g, and s_o.
4.4. Well-posedness. Next, we investigate the effects of capillary pressure in system (1.1), as represented by the diffusive term (B(U)U_x)_x. We first verify that this term is strictly parabolic (i.e., the eigenvalues of B(U) have strictly positive real parts) in the interior of the saturation triangle Δ.
Lemma 4.2. Under assumption (4.18), the balance matrix Q(U) is symmetric and positive definite in the interior of the saturation triangle Δ, and det Q(U) = λ_1 λ_2 λ_3 / λ.

Proof. Rewriting

   Q(U) = [ λ_1(λ_2 + λ_3)/λ   −λ_1 λ_2/λ
            −λ_1 λ_2/λ          λ_2(λ_1 + λ_3)/λ ],

it is apparent that Q(U) is symmetric and diagonally dominant, with nonnegative diagonal entries that are positive in the interior of Δ, by assumption (4.18). One can verify that det Q(U) = λ_1 λ_2 λ_3 / λ, so that det Q(U) > 0 in the interior of Δ.
Lemma 4.3. If P′ satisfies assumptions (4.24) and (4.25), then the eigenvalues of the diffusion matrix B(U) have positive real part in the interior of Δ, whereas on the boundary of Δ, one eigenvalue of B(U) is zero and the other is positive.

Proof. A real 2×2 matrix has eigenvalues with positive real part provided that its trace and determinant are positive. The determinants of both Q and P′ are non-negative, by diagonal dominance (4.24), so that det B = det Q det P′ ≥ 0, with strict inequality in the interior of Δ. Consulting Eq. (4.26), we find that tr B > 0 by assumptions (4.24) and (4.25).
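The trace/determinant criterion used in this proof is straightforward to check numerically; the sketch below (ours) does so for the factorization B(U) = Q(U) P′(U).

```python
# Sketch (ours): a 2x2 real matrix has eigenvalues with positive real part
# iff its trace and determinant are both positive; apply this to B = Q P'.
import numpy as np

def strictly_parabolic(Q, dP):
    B = Q @ dP
    return np.trace(B) > 0 and np.linalg.det(B) > 0
```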
Remark. It is easily checked (see the calculations below) that the normal to ∂Δ is a left eigenvector of F′(U) and a left null vector of B(U) on ∂Δ. This is a necessary condition in order that Δ be invariant under the evolution of system (1.1); see Theorem 1.47, p. 200 of [40]. However, sufficiency does not follow from this standard result, due to degeneracy of B(U) on the boundary; indeed, the question of invariance in this degenerate situation seems to be an interesting one in its own right.
5. Linear Stability Conditions and Degree
In this section we compute the degree of the fast-family line field for three-phase flow models. We also find conditions under which the Majda-Pego stability conditions (2.6) hold near ∂Δ.
5.1. Eigenvalues and eigenvectors. It proves convenient to work with the scaled Jacobian
matrix
and to introduce the following notation:
Let v s and v f denote the slow and fast eigenvalues of A, respectively. Then
a d b
c a d
We may choose the matrix of right eigenvectors to be
This a good choice for calculations near the edge where The
corresponding matrix of left eigenvectors is given by
d)
a
Using the relation d bc, we nd that the determinant of R in Eq. (5.5) 2d[a
On the other hand, for calculations near the corner where
choice for the right eigenvector matrix is
(The reason is that the projections (1; 1) r s and (1; 1) r f are positive.) The corresponding
matrix of left eigenvectors is
d)
Again by the relation d bc, the determinant of R in Eq. (5.7) is 4d[d
5.2. Degree. In this section we show that, for models satisfying the assumptions of Sec. 4.2, the fast-family line field has winding number 1 around a suitable contour close to the boundary of the saturation triangle.
Lemma 5.1. For a model satisfying assumptions (4.18), (4.19), and (4.21), consider the arc of a circle centered at the corner u_1 = u_2 = 0 leading from the edge u_2 = 0 to the edge u_1 = 0. If the radius of the arc is sufficiently small, the fast-family line field turns by angle π/2 upon traversing the arc.

Proof. The projection of the eigenvector r_f, as defined in Eq. (5.7), onto the vector (1, 1) is positive; by assumption (4.21), this quantity is positive along any open arc of sufficiently small radius. At the edge u_2 = 0, by assumption (4.19) and assumption (4.18), d = a and r_f is a positive multiple of (1, 0)^T. Similarly, at the edge u_1 = 0, d = a and r_f is a positive multiple of (0, 1)^T. Therefore r_f turns by angle π/2.
Definition 5.2. For δ > 0, let the contour Γ_δ be the boundary of the set of points in Δ at distance at least δ from the boundary ∂Δ.

Proposition 5.3. Consider a model satisfying assumptions (4.18), (4.19), and (4.21). For sufficiently small δ > 0, the degree of the fast-family line field around Γ_δ is 1.

Proof. According to the preceding lemma, the fast-family line field rotates by π/2 at the corner u_1 = u_2 = 0. Taking into account the changes of coordinates, the same lemma shows that the fast-family line field rotates by π/4 at each of the two remaining corners. While traversing edges, the fast-family line field does not rotate because it is parallel to the edge. Therefore the line field rotates by π around Γ_δ.
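As a numerical companion to this proposition (ours, not the paper's), one can sample the contour of Definition 5.2 and evaluate the degree of the fast-family line field with the line_field_degree sketch from Sec. 3.1; fast_eigenvector below is a user-supplied placeholder returning a right eigenvector of F′(u) for the larger eigenvalue.

```python
# Sketch (ours): sample the boundary of the set of points of the saturation
# triangle at distance >= delta from its boundary (a smaller triangle) and
# check the winding of the fast-family line field around it.
import numpy as np

def inner_triangle_contour(delta, n_per_side=200):
    a = np.array([delta, delta])
    b = np.array([1.0 - delta * (1.0 + np.sqrt(2.0)), delta])
    c = np.array([delta, 1.0 - delta * (1.0 + np.sqrt(2.0))])
    pts = []
    for p, q in [(a, b), (b, c), (c, a)]:
        for t in np.linspace(0.0, 1.0, n_per_side, endpoint=False):
            pts.append((1.0 - t) * p + t * q)
    return pts

# degree = line_field_degree(inner_triangle_contour(0.05), fast_eigenvector)
```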
5.3. Stability analysis near edges. In this section we determine a necessary and sufficient condition for a model to be linearly stable near the open edges of the saturation triangle, in that conditions (2.6) hold with strict inequality except on the edge. Without loss of generality, we focus on the edge u_2 = 0. The calculations presented in this section are not applicable at the corners, so we assume that 0 < u_1 < 1.
We choose the eigenvector matrices (5.5) and (5.6). If u_2 = 0, then, by assumptions (4.18) and (4.19), a > 0, d = a, and det R ≠ 0. Therefore, when evaluated along the edge, the fast-family eigenvectors r_f and ℓ_f are positive multiples of (1, 0)^T and (1, b/(2a)), respectively. Assumption (4.18) also entails that, when u_2 = 0, the only nonzero entry of the balance matrix Q is Q_11. Consequently, by definition (4.10) of the diffusion matrix B, ℓ_f B r_f is a positive multiple of p_{13,1} along the edge u_2 = 0. We have proved the following result.

Proposition 5.4. For a model satisfying assumptions (4.18), (4.19), and (4.25), the fast-family Majda-Pego condition holds in an open neighborhood of the open edge u_2 = 0.
The right eigenvector for the slow family is r However, we shall
see that ' s hence to verify that ' s B r s 0, we cannot simply set
As ' s is a positive multiple of (c; (a +d)), Eq. (4.11) implies that ' s Q is a positive multiple
of the vector d)). By assumption (4.20),
we may extend 1
Q, to be a continuous function at
positive multiple of ( 1
It proves useful to write this last vector as r T
nd a simplied expression for
First we note that when u
Using these results in the denition (5.9) of , we nd that
(The quantity 3;1 3;2 is the derivative of 3 with respect to u 1 with u 3 xed.) We can
now formulate a condition equivalent to the Majda-Pego condition for the slow family.
Proposition 5.5. For a model satisfying assumptions (4.18), (4.19), (4.20), and (4.25), let
let be given by Eq. (5.13). Then
r T
is a necessary and sucient condition for the model to satisfy the slow-family Majda-Pego
condition in an open neighborhood of the open edge
strict inequality when u
5.4. Stability analysis at corners. In this section we determine a necessary and sufficient condition for a model to be linearly stable near the corners of the saturation triangle, in that conditions (2.6) hold with strict inequality except on the edges. Without loss of generality, we focus on the corner u_1 = u_2 = 0.

Condition (2.5) is equivalent to the nonnegativity of the diagonal elements of L Q P′ R. For the eigenvector matrices R and L we choose those defined by Eqs. (5.7) and (5.8), respectively. Using the balance matrix defined in Eq. (4.11), we see that (det R) ℓ_s Q and (det R) ℓ_f Q are given by Eqs. (5.15) and (5.16). By assumption (4.21), det R is positive in the interior of Δ in a neighborhood of the corner.
Proposition 5.6. For a model satisfying assumptions (4.18), (4.22), and (4.25), the fast-family Majda-Pego condition holds in the interior of Δ in a neighborhood of the corner only if inequality (5.17) holds in the interior of Δ in a neighborhood of this corner, where a, b, c, and d are given by Eqs. (5.2) and (5.3).
Proof. The stability condition for the fast family is that (det R)' f QP 0 r f > 0. By assumption
(4.18) (and the corresponding assumption along the edge tend to
zero at the corner. In particular, we may neglect f 1 and f 2 relative to 1 in the expression
for (det R)' f Q. Moreover, by assumptions (4.18) and (4.19), a, b, c, and hence d vanish at
the corner; therefore, by assumption (4.22), f 1 (a d) is negligible relative to
d) is negligible relative to . Thus (det R)' f Q can be approximated
by d) T , the inequality
(det R)' f QP 0 r f > 0 is equivalent to f > 0 near the corner.
By an analogous argument, we can prove the following result.
Proposition 5.7. For a model satisfying assumptions (4.18), (4.22), and (4.25), the Majda-Pego condition for the slow family holds in the interior of Δ in a neighborhood of the corner only if inequality (5.18) holds in the interior of Δ in a neighborhood of this corner, where a, b, c, and d are given by Eqs. (5.2) and (5.3).
5.5. Summary. Combining the results of this section, we have the following theorem.

Theorem 5.8. Consider a model satisfying assumptions (4.18)-(4.22) and (4.25) along with conditions (5.14), (5.17), and (5.18). For sufficiently small δ > 0, the contour Γ_δ of Definition 5.2 lies within the interior of MP and the degree of the fast-family line field around Γ_δ is 1.
6. A Three-Phase Flow Model
In this section, we present a model for the relative permeabilities and the capillary pressures used in petroleum reservoir engineering and verify the assumptions of Secs. 4.2 and 4.3. Using the results of Sec. 5, we show that the model is linearly stable near the boundary of the saturation triangle except in an open set containing two of the corners in its closure.
6.1. Stone's permeability model. The permeability of each of the three phases is an experimentally measured function that depends on the saturations of the phases. Leverett and Lewis [28] found that, for many reservoirs, k_w depends only on s_w and k_g depends only on s_g. We make the same assumption here.

For most reservoirs, the oil permeability depends on two saturations. As there is little experimental data concerning the three-phase region, this function is usually measured only along two edges of the saturation triangle and then interpolated to the interior of Δ. Restricted to the edge s_g = 0, k_o reduces to the relative oil-water permeability k_ow; similarly, k_o restricted to the edge s_w = 0 is the relative oil-gas permeability k_og. To satisfy assumption (4.12), we require that k_ow(s_w) and k_og(s_g) are non-negative and that k_ow(1) = 0 and k_og(1) = 0. Stone proposed an interpolation scheme, Eq. (6.1), involving the product k_ow(s_w) k_og(s_g), for determining k_o throughout Δ from k_ow and k_og.

The particular model we examine is the quadratic Stone's model, in which k_w(s_w), k_ow(s_w), k_g(s_g), and k_og(s_g) are quadratic. In the notation of system (4.9), the mobilities of the quadratic Stone's model are given by Eqs. (6.3) if we associate u_1 to s_w and u_2 to s_g, whereas they are given by Eqs. (6.4) if we associate u_1 to s_w or s_g and u_2 to s_o.
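The following sketch (ours) shows how fractional flow functions are assembled from user-supplied permeabilities in the spirit of this section; the placeholder permeability functions in the example are assumptions for illustration, not the paper's exact quadratic Stone formulas.

```python
# Sketch (ours): fractional flows f_i = lambda_i / (lambda_w + lambda_g + lambda_o),
# as in Eq. (4.5), with k_w depending only on s_w, k_g only on s_g, and k_o
# interpolated from two-phase data.
def fractional_flows(sw, sg, k_w, k_g, k_o, mu_w, mu_g, mu_o):
    so = 1.0 - sw - sg
    lam = [k_w(sw) / mu_w, k_g(sg) / mu_g, k_o(sw, sg, so) / mu_o]
    total = sum(lam)
    return [l / total for l in lam]

# Example with placeholder permeabilities (illustrative only):
f_w, f_g, f_o = fractional_flows(
    0.3, 0.2,
    k_w=lambda sw: sw ** 2,
    k_g=lambda sg: sg ** 2,
    k_o=lambda sw, sg, so: so ** 2,   # placeholder, not Stone's interpolation
    mu_w=1.0, mu_g=0.3, mu_o=2.0,
)
```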
6.2. Leverett's capillary pressure model. Based on the work of Leverett [28], Aziz and Settari [4] propose that the capillary pressure functions for three-phase flow can be taken to have the form p_wo = P_wo(s_w) and p_og = P_og(s_g), where P_wo and P_og are certain monotone decreasing functions. Denoting their derivatives by P′_wo and P′_og, we assume the sign condition (6.5) on these derivatives.

Associating u_1 to s_w and u_2 to s_g in system (4.9), p_13 and p_23 represent p_wo and p_go, respectively. Therefore the Leverett capillary pressure Jacobian is the matrix given in Eq. (6.6).

On the other hand, considering the oil to be one of the phases in Eq. (4.9), the Leverett capillary pressure Jacobian takes a different form. Associating u_1 to s_w and u_2 to s_o without loss of generality, we have that u_3 is s_g and p_13 is p_w − p_g; in this case, the Jacobian is the matrix given in Eq. (6.7).

It is easy to see that assumptions (4.24) and (4.25) of Sec. 4.3 hold for the Leverett capillary pressures under assumption (6.5).
6.3. Verification of assumptions. We now verify the assumptions of Sec. 4.2.

Lemma 6.1. The quadratic Stone's model satisfies assumptions (4.18)-(4.22) if we associate u_1 to s_w or s_g and u_2 to s_o, provided that the oil viscosity is no less than half of the water and gas viscosities, i.e., 2μ_o ≥ μ_w and 2μ_o ≥ μ_g. (6.8)

Proof. The mobilities are given by Eqs. (6.4). Assumption (4.18) is evidently true.
Let us verify assumption (4.19). Since
we see from Eq. (4.17) that
is nonnegative everywhere and reduces to
u2=0
on the edge On the other hand,
so that
reduces to
u2=0
In particular, f at the corner on the open edge
only if
These inequalities hold by virtue of assumption (6.8).
Now we verify assumptions (4.20), (4.21) and (4.22). Since
dividing by 2 gives
so that f 1
f 2;1 has a continuous extension to to the open edge where
In
f 2;1 has a positive limit at (0; 0). Moreover,
so that f 1
bounded away from zero in the interior of in a neighborhood of
(0; 0). Similarly,
so that f 1;2 > 0 in the interior of in a neighborhood of (0; 0). In fact, f 1
f 1;2 has a positive
limit at (0; 0).
Lemma 6.2. The quadratic Stone's model satisfies assumptions (4.18)-(4.22) if we associate u_1 to s_w and u_2 to s_g.

Proof. The mobilities are given by Eqs. (6.3). Assumption (4.18) is evidently true.
Since
we see by Eq. (6.10) that f 2;2 is nonnegative near is zero on the
edge On the other hand,
so that by Eq. (6.13), f 1;1 on u by Eq. (6.14). Therefore f 1;1 > f 2;2 on
the open edge assumption (4.19) holds.
shows that f 1
f 2;1 has a continuous extension to the edge
and tends to 2 at the corner (0; 0). Similarly, by Eq. (6.20), f 1
tends to 2 at the corner.
Thus assumptions (4.20), (4.21), and (4.22) hold.
6.4. Linear stability near edges. In this section we prove the following result.
Theorem 6.3. Assuming conditions (6.5) and (6.8), the quadratic Stone's model with the Leverett capillary pressures satisfies the stability conditions (2.5) near each open edge of Δ, with strict inequality away from the edges.

Proof. By Props. 5.4 and 5.5 and the results of Secs. 6.2 and 6.3, the proof reduces to verifying condition (5.14) of Prop. 5.5.
First, we associate u 1 to s w or s g and u 2 to s o . By Eqs. (6.14), (6.11), and (6.21) the
vector r reduces to
on Eqs. (6.12) and (6.12), 1
so that
Using the capillarity matrix P 0 given in Eq. (6.7), we nd
r T
Therefore assumptions (6.5) and (6.8) imply condition (5.14).
Second, we associate u 1 to s w and u 2 to s g . By the calculations in the proof of Lemma 6.2,
and
Using the capillary pressure Jacobian (6.6), we nd that
Thus assumption (6.5) implies condition (5.14).
6.5. Linear stability and instability at corners. In this section we investigate Majda-Pego conditions at the corners of the saturation triangle Δ.

Theorem 6.4. Consider a model satisfying assumptions (4.18), (4.22), and (4.25) and using the Leverett capillary pressures. This model satisfies the Majda-Pego condition (2.6) for the fast family in the interior of Δ in neighborhoods of each corner; it also satisfies the Majda-Pego condition for the slow family in the interior of Δ in a neighborhood of the corner s_o = 1.
Proof. First associate u 1 to s w and u 2 to s g . Then Leverett's capillarity matrix P 0 is given
by Eq. (6.6) and inequalities (5.18) and (5.17) become
d)
d) > 0; (6.31)
d)
d) > 0: (6.32)
In the interior of in a neighborhood of the corner, b and c are positive and d > jaj, so that
the coecients of
and in s
wg and f
wg are positive; thus s
wg > 0.
Now associate u 1 to s w or s g and u 2 to s
by Eq. (6.7), and the inequality (5.17) becomes
d)
c) > 0:
By the same argument as given above, the condition f
wo > 0 is satised in the interior of
in a neighborhood of the corner.
However, the Majda-Pego condition for the slow family is violated for the quadratic Stone's model and Leverett capillary pressures, as we now show.

Theorem 6.5. Consider the quadratic Stone's model with the Leverett capillary pressures. Assume that the oil viscosity is no less than half of the water and gas viscosities. This model violates the Majda-Pego condition (2.6) for the slow family in an open set containing the corners s_w = 1 and s_g = 1 in its closure.
Proof. Without loss of generality, we may associate u 1 to s w and u 2 to s Consider the set
where Eqs. (6.13), (6.10), (6.12), and (6.9),
so that in a neighborhood of the corner, the set a = 0 is a curve through the corner tangent
to the line
When
c ]((
(det R)' s
as seen from Eqs. (5.7) and (5.15). By Eqs. (6.21) and (6.19),
so that
c) as along the curve a = 0. Therefore
near the corner.
7. Conclusion
Linear stability is a desirable property. We have numerical evidence that certain three-phase flow models satisfying assumptions (4.18)-(4.22) are linearly stable near the boundary of the saturation triangle and that a curve satisfying the hypotheses of Theorem 4.1 can be constructed. This is not the case for the quadratic Stone's model with Leverett's capillary pressures: as we can see from Theorem 6.5, there is an instability region near the two vertices. On the arc with radius 0.04, the instability region corresponds to the negative part of the lowest curve in Fig. 7.1. However, this figure shows that for larger radii, the arcs avoid the instability region. We have verified numerically that the eigenvectors wind correctly along these arcs. Based on these facts, we have constructed curves which also satisfy the hypotheses of Theorem 4.1. Also for the quadratic Stone's model we found numerically regions in parameter space for which the hypotheses (H1)-(H3) are satisfied. Theorem 4.1 establishes the existence of a DRS point inside Δ, which generically implies other instabilities. (Indeed, the above discussion shows that a DRS point is to be expected when the instability regions near the vertices are disconnected from the elliptic region.)

For some nongeneric values of parameters, the elliptic region may collapse into umbilic points that coincide with the DRS points. The same collapse occurs for certain other three-phase flow models. We have no evidence of DRS-instabilities for such degenerate DRS points.
We regard the stability inequalities as providing mathematical guidelines for the construction of adequate three-phase flow models. Three-phase flow uses quantities such as permeabilities and capillary pressures that can only be measured accurately at the boundary of the saturation triangle. We may regard the stability conditions as constraints on interpolation schemes for this data into the interior of state space.

Figure 7.1. Plot of ℓ_s B r_s / r_3 (multiplied by 10³ det R) versus polar angle, for radii up to 0.08. For this figure, the viscosities are ..., and the values of the capillary pressure derivatives, evaluated at the corner, are ... .
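The computation behind Fig. 7.1 can be sketched as follows (our code, with user-supplied model functions): evaluate the slow-family quantity ℓ_s B r_s along an arc of radius δ around a corner of the saturation triangle and inspect its sign.

```python
# Sketch (ours): the slow-family Majda-Pego quantity l_s B r_s along an arc of
# radius delta around the corner u1 = u2 = 0. jacobian(u) and diffusion(u) are
# user-supplied functions for F'(u) and B(u); the arc is assumed to stay in the
# strictly hyperbolic region.
import numpy as np

def slow_family_quantity_on_arc(jacobian, diffusion, delta, n=200):
    thetas = np.linspace(0.0, np.pi / 2, n)[1:-1]   # stay inside the open triangle
    values = []
    for th in thetas:
        u = np.array([delta * np.cos(th), delta * np.sin(th)])
        eigvals, R = np.linalg.eig(jacobian(u))
        order = np.argsort(eigvals.real)
        R = R[:, order].real
        L = np.linalg.inv(R)                        # rows: left eigenvectors, l_j r_j = 1
        values.append(L[0] @ diffusion(u) @ R[:, 0])   # slow family
    return thetas, np.array(values)
```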
Acknowledgments
We thank Beata Gundelach for careful assistance in preparing the manuscript. We also
thank Somsak Orankitjaroen for thoroughly reading the manuscript and locating errors in a
preliminary version.
--R
Multiple viscous solutions for systems of conservation laws
Nonuniqueness of nonclassical solutions of Riemann problems
Elsevier Applied Science
Conservation laws of mixed type describing three-phase flow in porous media
Cubic Li
Features of three-component
Existence and asymptotic behavior of measure-valued solutions for three-phase flow in porous media
Oscillation waves in Riemann problems inside elliptic regions for conservation laws of mixed type
On the Riemann problem for a prototype of a mixed type conservation law
On the strict hyperbolicity of the Buckley-Leverett equations for three-phase flow in a porous medium
A global formalism for nonlinear waves in conservation laws
Steady flow of gas-oil-water mixtures through unconsolidated sands
Stable viscosity matrices for system of conservation laws
Stable hyperbolic singularities for three-phase flow models in oil reservoir simulation
Fundamentals of numerical reservoir simulation
The physics of flow through porous media
The Riemann problem for a class of conservation laws of mixed type
Loss of real characteristics for models of three-phase flow in a porous medium
Admissibility criteria for propagating phase boundaries in a van der Waals fluid
Shock waves and reaction-diffusion equations
Departamento de Matem
--TR
Conservation laws of mixed type describing three-phase flow in porous media
Loss of strict hyperbolicity of the Buckley-Leverett equations for three phase flow in a porous medium
On the strict hyperbolicity of the Buckley-Leverett equations for three-phase flow in a porous medium
Admissibility conditions for shocks in conservation laws that change type
Oscillation waves in Riemann problems inside elliptic regions for conservation laws of mixed type
Nonuniqueness of solutions of Riemann problems
On the oscillatory solutions in hyperbolic conservation laws
Fundamentals of Numerical Reservoir Simulation
Nonexistence of Riemann solutions for a quadratic model deriving from petroleum engineering
--CTR
E. Abreu, J. Douglas, Jr., F. Furtado, D. Marchesin, F. Pereira, Three-phase immiscible displacement in heterogeneous petroleum reservoirs, Mathematics and Computers in Simulation, v.73 n.1, p.2-20, 6 November 2006 | capillary pressure instability;nonunique Riemann solution;flow in porous media |
634972 | Automated techniques for provably safe mobile code. | We present a general framework for provably safe mobile code. It relies on a formal definition of a safety policy and explicit evidence for compliance with this policy which is attached to a binary. Concrete realizations of this framework are proof-carrying code, where the evidence for safety is a formal proof generated by a certifying compiler, and typed assembly language, where the evidence for safety is given via type annotations propagated throughout the compilation process in typed intermediate languages. Validity of the evidence is established via a small trusted type checker, either directly on the binary or indirectly on proof representations in a logical framework. | Introduction
Integrating software components to form a reliable system is a long-standing
fundamental problem in computer science. The problem manifests itself in
numerous guises:
(1) How can we dynamically add services to an operating system without
compromising its integrity?
(2) How can we exploit existing software components when building a new
application?
(3) How can we support the safe exchange of programs in an untrusted environment
(4) How can we replace components in a running system without disrupting
its operation?
These are all questions of modularity. We wish to treat software components
as "black boxes" that can be safely integrated into a larger system without
fear that their use will compromise, maliciously or otherwise, the integrity of
the composite system. Put in other terms, we wish to ensure that the behavior
of a system remains predictable even after the addition of new components.
Three main techniques have been proposed to solve the problem of safe component integration:
(1) Run-time checking. Untrusted components are monitored at execution
time to ensure that their interactions with other components are strictly
limited. Typical techniques include isolation in separate hardware address
spaces and software fault isolation [1]. These methods impose serious
performance penalties in the interest of safety. Moreover, there is often a
large semantic gap between the low-level properties that are guaranteed
by checking (e.g., address space isolation) and the high-level properties
that are required (e.g., black box abstraction).
(2) Source-language enforcement. All components are required to be
written in a designated language that is known, or assumed, to ensure
"black box" abstraction. These techniques suffer from the requirement
that all components be written in a designated, safe language, a restriction
that is the more onerous for lack of widely-used safe languages. More-
over, one must assume not only that the language is properly defined, but
also that its implementation is correct, which, in practice, is never the
case.
(3) Personal authority. No attempt is made to enforce safety; rather, the
component is vouched for by a person or company willing to underwrite
its safety. Digital signature schemes may be used to authenticate the
underwritten code. In practice few, if any, entities are willing to make
assurances for the correctness of their code.
What has been missing until now is a careful analysis of what is meant by safe
code exchange, rather than yet another proposal for how one might achieve a
vaguely-defined notion of safe integration. Our contention is that safe component
integration is fundamentally a matter of proof. To integrate a component
into a larger system, the code recipient wishes to know that the component is
suitably well-behaved - that is, compliant with a specified safety policy. In
other words, it must be apparent that the component satisfies a safety specification
that governs its run-time behavior. Checking compliance with such a
safety specification is a form of program verification in which we seek to prove
that the program complies with the given safety policy.
When viewed as a matter of verification, the question arises as to who (the code
producer or the code recipient) should be responsible for checking compliance
with the safety policy. The problem with familiar methods is that they impose
the burden on the recipient. The code producer insists that the recipient employ
run-time checks, or comply with the producer's linguistic restrictions, or
simply trust the producer to do the right thing. But, we argue, this is exactly
the wrong way around. To maximize flexibility we wish to exploit components
from many different sources; it is unreasonable to expect that a code recipient
be willing to comply with the strictures of each of many disparate methods.
Rather, we argue, it is the responsibility of the code producer to demonstrate
safety. It is (presumably) in the producer's interest for the recipient to use its
code. Moreover, it is the producer's responsibility (current practices notwith-
standing) to underwrite the safety of its product. In our framework we shift
the burden of proof from the recipient to the producer.
Having imposed the burden of proof on the producer, how is the consumer
to know that the required obligations have been fulfilled? One method is to
rely on trust - the producer signs the binary, affirming the safety of the
component. This suffers from the obvious weakness that the recipient must
trust not only the producer's integrity, but also must trust the tools that
the producer used to verify the safety of the component. Even with the best
intentions, it is unlikely that the methods are foolproof. Consequently, few
producers are likely to make such a warrant, and few consumers are likely to
rely on the code they receive. A much better method is one that we propose
here: require the producer to provide a formal representation of the proof that
the code is compliant with the safety policy. After all, if the producer did carry
out such a proof, it can easily supply the proof to the consumer. Moreover,
the recipient can use its own tools to check the validity of the proof to ensure
that it really is a genuine proof that the given code complies with the safety
specification. Importantly, it is much easier to check a proof than it is to find
a proof. Therefore the code recipient need only trust its own proof checker,
which is, if the method is to be effective, much simpler than the tools required
to find the proof in the first place.
The message of this paper is that this approach can, in fact, be made to work
in practice. We are exploring two related techniques for implementing our
approach to safe component exchange: proof-carrying code and typed object
code. In both cases mobile code is annotated with a formal warrant of its
safety, which can be easily checked by the code recipient. To produce such a
warrant, we are exploring the construction of certifying compilers that produce
suitably-annotated object code. Such a compiler could be used by a code
producer to generate certified object code. Two points should be kept in mind
when reading this paper:
(1) The tools and techniques of logic, type theory, and semantics are indispensable.
(2) These methods have been implemented and are available today.
2 The Safety Infrastructure

The first component in a system for safe mobile code is the safety infrastruc-
ture. The safety infrastructure is the piece of the system that actually ensures
the safety of mobile code before execution. It forms the trusted computing
base of the system, meaning that all consumers of mobile code install it and
depend on it, and therefore it must work properly. Any defect in the trusted
computing base opens a possible security hole in the system.
A fundamental concern in the design of the trusted computing base is that it
be small and simple. Large and/or complicated code bases are very likely to
contain bugs, and those bugs are likely to result in exploitable security holes.
For us to have confidence in our safety infrastructure, its trusted components
must be small and simple enough that they are likely to be correct.
The design of the safety infrastructure consists of three parts. First, one must
define a safety policy. Second, one specifies what will be acceptable as evidence
of compliance with the safety policy. Suppliers of mobile code will then be
required also to supply evidence of compliance in an acceptable form. Third,
one must build software that is capable of automatically checking whether
purported evidence of safety is actually valid.
2.1 Safety Policies
The first task in the design of the safety infrastructure is to decide what
properties mobile code must satisfy to be considered safe. In this paper we
will consider a relatively simple safety policy, consisting of memory safety,
control-flow safety, and type safety.
(1) Memory safety is the property that a program never dereferences an
invalid pointer, never performs an unaligned memory access, and never
reads or writes any memory locations to which it has not been granted
access. This property ensures the integrity of all data not available to
the program, and also ensures that the program does not crash due to
memory accesses.
(2) Control-flow safety is the property that a program never jumps to an
address not containing valid code, and never jumps to any code to which
it has not been granted access. This property ensures that the program
does not jump to any code to which it is not allowed (e.g., low-level
system calls), and also ensures that the program does not crash due to
jumps.
(3) Type safety is the property that every operation the program performs
is performed on values of the appropriate type. Strictly speaking, this
property subsumes memory and control-flow safety (since memory accesses
and jumps are program operations), but it also makes additional
guarantees. For example, it ensures that all (allowable) system calls are
made using appropriate values, thereby ruling out attacks such as buffer
overruns on other code in the system. The additional guarantees provided
by type safety are often very expensive to obtain using dynamic means,
but the static means we discuss in this paper can provide them at no
additional cost.
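To make the first two properties concrete, the following sketch (ours, written in OCaml; it is not taken from any of the systems discussed in this paper, and the instruction forms and permission sets are hypothetical) phrases memory and control-flow safety as a predicate on individual machine transitions. A run-time monitor would have to evaluate such a predicate on every step; avoiding exactly that per-step overhead is the point of the static techniques described below.

(* A toy machine model: a policy grants read, write, and jump permissions
   over word-addressed memory. *)
type policy = {
  readable : int -> bool;    (* addresses the program may read  *)
  writable : int -> bool;    (* addresses the program may write *)
  code_ok  : int -> bool;    (* addresses it may jump to        *)
}

type step =
  | Read  of int             (* load from an address           *)
  | Write of int             (* store to an address            *)
  | Jump  of int             (* transfer control to an address *)

(* Memory safety: every access is aligned and granted by the policy.
   Control-flow safety: every jump targets permitted code. *)
let step_safe (p : policy) (s : step) : bool =
  match s with
  | Read a  -> a mod 4 = 0 && p.readable a
  | Write a -> a mod 4 = 0 && p.writable a
  | Jump a  -> p.code_ok a

(* An execution trace is safe if every one of its steps is. *)
let trace_safe p trace = List.for_all (step_safe p) trace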
Stronger safety policies are also possible, including guarantees of the integrity
of data stored on the stack [2], limits on resource consumption [3,4], and
policies specified by allowable traces of program operations [5]. However, for
policies such as these, the evidence of compliance (which we discuss in the next
section) can be more complicated, thereby requiring greater expense both to
produce and to verify that evidence, and possibly reducing confidence in the
system's correctness. Thus the choice of safety policy in a practical system
involves important trade-offs.
It is also worth observing that stronger policies are not always better if they
rule out too many programs. For example, a policy that rejects all programs
provides great safety (and is cheap to implement), but is entirely useless for
a safety infrastructure. Therefore, it is important to design safety policies to
allow as many programs as possible, while still providing sufficient safety.
2.2 Evidence of Compliance
The safety policy establishes what properties mobile programs must satisfy in
order to be permitted to execute on a host. However, it is impossible in general
for a code consumer to determine whether an arbitrary program complies with
that policy. Therefore, we require that suppliers of mobile code assist the consumer
by providing evidence that their code complies with the safety policy.
This evidence, which we may think of as a certificate of safety, is packaged
together with the mobile program and the two together are referred to as certified
code. Upon obtaining certified code, the code consumer (automatically)
verifies the validity of the evidence before executing the program code.
The second task in the design of the safety infrastructure is to decide what
form the evidence of compliance must take. This decision is made in the light
of several considerations:
(1) Since evidence of safety must be transferred over the network along with
the program code they certify, we wish the evidence to be as small as
possible in order to minimize communication overhead.
(2) Since evidence must be checked before running any program code, we
desire verification of evidence to be as fast as possible. Clearly, smaller
evidence can lead to faster checking, but we can also speed evidence
verification by careful design of the form of evidence.
(3) As discussed above, the evidence verifier is an essential part of the trusted
computing base; it must work properly or there will be a potential security
hole in the system. For us to have confidence that the verifier works
properly, it must be simple, which means that the structure of the evidence
it checks must also be simple. Thus, not only is simplicity desirable
from an aesthetic point of view, but it is also essential for the system to
work.
(4) Finally, to have complete confidence that our system provides the desired
safety, we must prove with mathematical rigor that programs carrying
acceptable evidence of safety really do comply with the safety policy. This
proof is at the heart of the safety guarantees that the system provides.
For such proofs to be feasible, the structure of the evidence must be built
on mathematical foundations.
In light of these considerations, we now discuss two different forms that evidence
of compliance may take: explicit proofs, which are employed in the
Proof-Carrying Code infrastructure [6], and type annotations, which are employed
in the Typed Assembly Language infrastructure [7,2].
Explicit Proofs
The most direct way to provide evidence of safety is to provide an explicit
formal proof that the program in question complies with the safety policy.
This is the strategy employed by Proof-Carrying Code (PCC). It requires a
formal language in which safety proofs can be expressed. Any such language
should be designed according to the following criteria.
Effective Decidability: It should be efficiently decidable if a given object
represents a valid safety proof.
Compactness: Proofs should have small encodings.
Generality: The representation language should permit proofs of different
safety properties. Ideally, it should be open-ended so that new safety policies
can be developed without a change in the trusted computing base.
Simplicity: The proof representation language should be as simple as possi-
ble, since we must trust its mathematical properties and the implementation
of the proof checker.
Our approach has been to use the LF logical framework [8] to satisfy these
requirements. A logical framework is a general meta-language for the representation
of logical inferences rules and deductions. Various logics or theories can
be specified in LF at a very high level of abstraction, simply by stating valid
axioms and rules of inference. This provides generality, since we can separate
the theories required for reasoning about safety properties, such as arithmetic,
memory update, and access, from the underlying mechanism of
checking proofs. It is also simple, since it is based on a pure, dependently
typed λ-calculus whose properties have been deeply investigated [9,10].
Proofs in a logic designed for reasoning about safety properties are represented
as terms in LF. Checking that a proof is valid is reduced to checking that its
representation in the logical framework is well-typed. This can be carried out
effectively even for very large proof objects. Experiments in certifying compilation
[11] and decision procedures [12] yield proofs whose representation is
more than 1 MB, yet can still be checked. On the other hand, proofs in LF
are not compact without additional techniques for redundancy elimination.
Following some general techniques [13], Necula [14] has developed optimized
representations for a fragment of LF, which is sufficient for its use in
PCC applications. The experimental results obtained so far have validated the
practicality of this proof compression technique [11] for the safety policies discussed
here. Current research [15] is aimed at extending and improving these
methods to obtain further compression without compromising the simplicity
of the trusted computing base.
Type Annotations
A second way to provide evidence of safety is using type annotations. In this
approach a typing discipline is imposed on mobile programs, and the architects
of the system prove a theorem stating that any program satisfying that
type discipline will necessarily satisfy the safety policy as well [7]. However,
determining whether a program satisfies a type discipline involves finding a
consistent type scheme for the values in the program, and such a type scheme
cannot be inferred in general. Therefore, in this approach programs are required
to include enough type annotations for the type checker to reconstruct
a consistent type scheme. Such type annotations constitute the evidence of
safety, provided they are taken in conjunction with a theorem stating that
well-typed programs comply with the safety policy.
A principal advantage of the type annotation approach over the explicit proof
approach is that the soundness of the type system can be established once and
for all. In contrast, validity of explicit proofs does not establish the soundness
of the system of proof rules, and in practice the proof rules are freely customized
to account for the safety requirements of each application. The main
drawback of type annotations is that any program that violates a type sys-
tem's invariants will not be typeable under that type system, and therefore
cannot be accepted by the safety infrastructure, even if it is actually safe.
With explicit proofs, such invariants are not built in, so it is possible to work
around cases in which they do not hold.
The idea of using types to guarantee safety is by no means new. Many modern
high-level languages (e.g., ML, Modula-3, Java) rely on a type system to ensure
that all legal programs are safe. Such languages have even been used for safety
infrastructures; for example, the SPIN operating system [16] required that
operating system extensions be written in Modula-3, thereby ensuring their
safety. The drawback to using a high-level language to ensure safety is that
programs are checked for safety before compilation, rather than after, thereby
requiring that the entire compiler be included in the trusted computing base.
As discussed above, the confidence that one can have in the safety architecture
is inversely related to the size of its trusted computing base.
The Typed Assembly Language (TAL) infrastructure resolves this problem
by employing a type safe low-level language. In TAL, a type discipline is
imposed on executable code, and therefore the program code being checked
for safety is the exact code that will be executed. There is no need to trust a
compiler, because if the compiler is faulty and generates unsafe executables,
those executables will be rejected by the type checker.
The principal exercise in developing a type system for executable code is to
isolate low-level abstractions satisfying two conditions:
• The abstractions should be independently type checkable; that is, to whatever
extent type checking of the abstractions depends on surrounding code
and data, it should only depend on the types of that code and data, and
not on additional information not reflected in the types.
• The atomic operations on the abstractions should be single machine instructions.
As an example consider function calls. High-level languages usually provide a
built-in notion of functions. Functions can certainly be type checked indepen-
dently, but they are not dealt with by a single machine instruction. Rather,
function calls are processed using separate call and return instructions and
the intervening code is by no means atomic: the return address is stored in
accessible storage and can be modified or even disregarded. To satisfy the second
condition, TAL's corresponding abstraction is the code block, and code
blocks are invoked using a simple jump instruction. Functions are then composed
from code blocks by writing code blocks with an explicit extra input
containing the return address. (This decomposition corresponds to the well-known
practice in high-level language of programming in continuation-passing
style [17].) The first condition is satisfied by requiring code blocks to specify
the types of their inputs, just as functions in high-level languages specify the
types of their arguments and results. Without such specifications, it would be
impossible to check the safety of a jump without inspecting the body of the
jump's target.
For example, consider the TAL code below for computing factorial. This code
does not exhibit many of the complexities of the TAL type system, but it
serves to give the flavor of TAL programs. (More exhaustive examples appear
in Morrisett, et al. [7,2].)
fact:
  code{r1:int, r2:{r1:int}}.
  mov r3,1          % set up accumulator for loop
  jmp loop
loop:
  code{r1:int, r2:{r1:int}, r3:int}.
  bz r1,done        % check if done, branch if zero
  mul r3,r3,r1      % accumulate the product
  sub r1,r1,1       % decrement the counter
  jmp loop
done:
  code{r1:int, r2:{r1:int}, r3:int}.
  mov r1,r3         % move accumulator to result register
  jmp r2            % return to caller
In this code, the type for a code block is written {r1:τ1, ..., rn:τn},
indicating that the code may be called when registers r1, ..., rn contain
values of types τ1, ..., τn. The fact code block is given type {r1:int, r2:{r1:int}},
indicating that when fact is called, it must be given in r1 an integer (the
argument), and in r2 a code block (the return address) that when called must
be given an integer (the return value) in r1. When called, fact sets up an
accumulator register (with type int) in r3 and jumps to loop. Then loop
computes the factorial and, when finished, branches to done, which moves the
accumulator to the result register (r1) and returns to the caller. In
the final return to the caller, the extra registers r2 and r3 are forgotten to
match the precondition on r2, which only mentions r1.
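The "forgetting" of r2 and r3 at the final jump illustrates the check that a TAL-style verifier performs at every jump: the current register-file typing must supply, at the required types, every register mentioned in the target's precondition, and any additional registers may simply be dropped. The OCaml sketch below is our own illustration of that check under simplified assumptions (the real TAL type system [7,2] is considerably richer).

(* Value types: integers, or code blocks whose precondition is a
   register-file typing. *)
type ty = Int | Code of (string * ty) list

(* The current typing of the register file: register name -> type. *)
type regfile = (string * ty) list

(* Structural type equality; a full checker would also handle
   polymorphism and recursive types. *)
let rec ty_eq a b =
  match a, b with
  | Int, Int -> true
  | Code p, Code q ->
      List.length p = List.length q &&
      List.for_all2 (fun (r1, t1) (r2, t2) -> r1 = r2 && ty_eq t1 t2) p q
  | _ -> false

(* A jmp is accepted if every register demanded by the target's
   precondition is present in the current register file at the required
   type; extra registers (r3 above) are simply forgotten. *)
let check_jmp (current : regfile) (target_pre : regfile) : bool =
  List.for_all
    (fun (r, required) ->
       match List.assoc_opt r current with
       | Some actual -> ty_eq actual required
       | None -> false)
    target_pre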
As an alternative to typed assembly language, one can also strike a compromise
between high- and low-level languages by exploiting typed intermediate
languages for safety [18]. Using typed intermediate languages enlarges the
trusted computing base, since some part of the compiler must be trusted, but
it loosens the second condition on type systems for executable code. This provides
a spectrum of possible designs, the closer an intermediate language is
to satisfying the second condition, the lesser the amount of the compiler that
needs to be trusted. Moreover, as we discuss in Section 3, typed intermediate
languages are valuable for automated certification, even if the end result is a
typed executable.
2.3 Automated Verification
Since it plays such a central role for provably safe mobile code, we now elaborate
on the mechanisms for verifying safety certificates.
Explicit Proofs
As discussed above, the Proof-Carrying Code infrastructure employs the LF
logical framework. In the terminology of logical frameworks, a judgment is an
object of knowledge which may be evident by virtue of a proof. Typical safety
properties require only a few judgments, such as the truth of a proposition in
predicate logic, or the equality of two integers.
In LF, a judgment of an object logic is represented by a type in the logical
framework, and a proof by a term. If we have a proof P for a judgment J ,
then the representation ⌜P⌝ has type ⌜J⌝, where we write ⌜·⌝ for the representation
function. The adequacy theorem for a representation function guarantees this
property and its inverse: whenever we have a term M of type ⌜J⌝ then there is
a proof P for J. Both directions are critical, because together they mean that
we can reduce the problem of checking the validity of a proof P to verifying
that its representation ⌜P⌝ is well-typed.
So in PCC, checking compliance with a safety policy can be reduced to type
checking the representation of a safety proof in the logical framework.
But how does this technique allow for different safety policies? Since proofs
are represented as terms in LF, an inference rule is represented as a function
from the proofs of its premises to the proof of its conclusion. To represent a
complete logical system we only need to introduce one type constant for each
basic judgment and one term constant for each inference rule. The collection
of these constant declarations is called a signature (not to be confused with a
digital signature used to certify authenticity). So a particular safety
policy consists of a verification condition generator, which extracts a proof
obligation from a binary, and signature in LF, which expresses the valid proof
principles for the verification condition. This means that different policies can
be expressed by different signatures, and that the basic engine that verifies
evidence (the LF type checker) does not change for different policies. However,
we do have to trust the correctness of the LF signature representing a policy-
an inconsistent signature, for example, would allow arbitrary code to pass the
safety check.
Type checking in LF is syntax-directed and therefore in practice quite efficient
[13], especially if we avoid checking some information which can be
statically shown to be redundant [14]. Currently, the Touchstone compiler for
PCC discussed in Section 3.1 uses a small, efficient type checker for LF terms
written in C. Related projects on proof-carrying code [19,20] and certifying
decision procedures [12] use the Twelf implementation [21-23]. For more information
on logical frameworks, see [24].
Type Annotations
In case the safety policy is expressed in the form of typing rules, checking
compliance immediately reduces to type checking. In this case we have to
carefully design the language of annotations so that type checking is practical.
Generally, the more complicated the safety property the more annotations are
required. Once the safety property is fixed, there is a trade-off between space
and time: the more type annotations we have, the easier the type checking
problem. One extreme consists of no type annotations at all, which means
that type checking is undecidable. The other extreme is a full typing derivation
(represented, for example, in the logical framework) which is quite similar to
a proof in the PCC approach. For the safety policies we have considered so
far, it has not been difficult to find appropriate compromises between these
extremes that are both compact and permit fast type checking [25].
It is worth noting that in both cases (explicit proofs and type annotations) the
verification method is type checking. For proof-carrying code, this is always
type checking in the LF logical framework with some optimizations to eliminate
redundant work. For typed assembly language the algorithm for type
checking varies with the safety property that is enforced, although the basic
nature of syntax-directed code traversal remains the same.
3 Automated Certification
How are certificates of safety to be obtained? In principle we may use any
means at our disposal, without restriction or limitation. This freedom is assured
by the checkability of safety certificates - it is always possible to determine
mechanically whether or not a given certificate underwrites the safety of
a given program. Since the code recipient can always check the validity of a
safety certificate, there is no need to rely on the means by which the certificate
was produced.
Two factors determine how hard it is to construct a safety certificate for a
program:
(1) The strength of the assurances we wish to make about a program. The
stronger the assurances, the harder it is to obtain a certificate.
(2) The complexity of the programming language itself. The more low-level
the language, the harder it is to certify the safety of programs.
As a practical matter, the easier it is to construct safety certificates, the more
likely that code certification will be widely used.
The main technique we have considered for building safety certificates is to
build a certifying compiler for a safe, high-level language such as ML or Java
(or any other type-safe language, such as Ada or Modula). A certifying compiler
generates object code that is comparable (and often superior) in quality
to that of an ordinary compiler. A certifying compiler goes beyond conventional
compilation methods by augmenting the object code with a checkable
safety certificate warranting the compliance of the object code with the safety
properties of the source language. In this way we are able to exploit the safety
properties of semantically well-defined high-level languages without having to
trust the compiler itself or having to ensure the integrity of code in transit
from producer to consumer.
The key to building a certifying compiler is to propagate safety invariants
from the source language through the intermediate stages of compilation to
the final object code. This means that each compilation phase is responsible for
the preservation of these invariants from its input to its output. Moreover, to
ensure checkability of these invariants, each phase must annotate the program
with enough information for a code recipient to reconstruct the proof of these
invariants. In this way the code recipient can check the safety of the code,
without having to rely on the correctness of the compiler. In the (common)
case that the compiler contains errors, the purported safety certificate may
or may not be valid, but the recipient can detect the mistake. Since each
compilation phase can be construed as a recipient of the code produced by
the preceding stage, the compiler can check its own integrity by verifying the
claimed invariants after each stage. This has proved to be an invaluable aid
to the compiler writer [11,18].
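A minimal sketch of this discipline is given below; the names and interfaces are our own invention, not those of TILT or Touchstone. Each intermediate language carries a checker, and the pipeline re-verifies after every translation, so a compiler bug surfaces as a failed check rather than as an unsafe binary.

(* Each intermediate language comes with a program representation and a
   checker for its safety invariants (a type checker, for instance). *)
type 'prog lang = { name : string; check : 'prog -> bool }

(* A compilation phase translates programs of one language into the next. *)
type ('a, 'b) phase = { translate : 'a -> 'b; target : 'b lang }

exception Invariant_violated of string

(* Run one phase and immediately re-verify its output.  Composing such
   steps yields a self-checking pipeline; the code recipient ultimately
   needs to trust only the checker for the final, lowest-level language. *)
let run_phase (ph : ('a, 'b) phase) (p : 'a) : 'b =
  let q = ph.translate p in
  if ph.target.check q then q
  else raise (Invariant_violated ph.target.name)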
3.1 Constructing Evidence of Safety
We have explored two main methods for propagating safety invariants during
compilation:
(1) Translation between typed intermediate languages [26]. Safety
invariants are captured by a type system for the intermediate languages
of the compiler. The type system is designed to ensure that well-typed
expressions are safe, and enough type information attached to intermediate
forms to ensure that we may mechanically check type correctness.
The typed intermediate forms are "self-certifying" in the sense that the
attached type information serves as a checkable certificate of safety.
(2) Compilation to proof-carrying code [11]. Safety invariants are directly
expressed as logical assertions about the execution behavior of
conventional intermediate code. The soundness of the logic ensures that
these assertions correctly express the required safety properties of the
code. The safety of the object code is checked by a combination of verification
condition generation and automatic theorem proving. By equipping
the theorem prover with the means to generate a formal representation
of a proof, we may generate checkable safety certificates for the object
code.
These two methods are not mutually exclusive. We are currently exploring
their integration using dependent types which allow assertions to be blended
with types in a single type-theoretic formalism. This technique is robust
and can be applied to high-level languages [27,28] as well as low-level languages
[29], thereby providing an ideal basis for their use in certifying compilers
3.2 Typed Intermediate Languages
To give a sense of how type information might be attached to intermediate
code, we give an example derived from the representation of lists. At the level
of the source language, there are two methods for creating lists:
(1) nil, which stands for the empty list;
(2) cons(h,t), which constructs the non-empty list with head h and tail t.
These values are assigned types according to the following rules: 2
(1) nil has type list;
(2) If h has type int and t has type list, then cons(h,t) has type list.
There are a variety of operations for manipulating lists, including the car and
cdr operations, which have the following types:
(1) If l has type list, then car(l) has type int;
(2) If l has type list, then cdr(l) has type list.
The behavior of these operations is governed by the following transitions in
an operational semantics for the language:
(1) car(cons(h,t)) reduces to h;
(2) cdr(cons(h,t)) reduces to t.
One task of the compiler is to decide on a representation of lists in memory,
and to generate code for car and cdr consistently with this representation. A
typical (if somewhat simple-minded) approach is to represent a list by
(1) A pointer to ...
(2) ... a tagged region of memory containing ...
(3) ... a pair consisting of the head and tail of the list.
The tag field distinguishes empty from non-empty lists, and the pointer identifies
the address of the node in the heap. This representation can be depicted
as the following compound term:
ptr(tag[cons](pair(h,t)))
What is interesting is that each individual construct in this expression may
be thought of as a primitive of a typed intermediate language. Specifically,
(1) ptr(v) has type list if v has type [nil:void,cons:int*list]. The
bracketed expression defines the tags (nil and cons), and the type of
their associated data values (none, in the case of nil, a pair in the case
of a cons).
(2) tag[t](v) has type [t:τ, ...] if v has type τ. In particular,
tag[cons](pair(h,t)) has type [nil:void,cons:int*list] if h has
type int and t has type list.
(3) pair(l,r) has type τl*τr if l has type τl and r has type τr. In particular,
pair(h,t) has type int*list if h has type int and t has type list.
Corresponding to this representation we may generate code for, say, car(l)
that behaves as follows:
(1) Dereference the pointer l. The value l must be a pointer because its type
is list.
(2) Check the tag of the object in the heap to ensure that it is cons. It must
be either cons or nil because the type of the dereferenced pointer is
[nil:void,cons:int*list].
(3) Extract the underlying pair and project out its first component. It must
have two components because the type of the tagged value is int*list.
When expressed formally in a typed intermediate language, the generated code
for the car operation is defined in terms of primitive operations for performing
these three steps. The safety of this code is ensured by the typing rules associated
with these operations - a type correct program cannot misinterpret
data by, for example, treating the head of a list as a floating point number
(when it is, in fact, an integer).
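The following sketch (ours; the constructor names are hypothetical, and real typed intermediate languages such as those used in TILT [26] are far richer) renders the primitives ptr, tag, and pair together with rules (1)-(3) as a small OCaml checker, and gives a car that performs exactly the three steps just described.

(* Types of the toy intermediate language. *)
type ty =
  | Int | Void | ListT
  | Pair of ty * ty
  | Sum  of (string * ty) list      (* e.g. [nil:void, cons:int*list] *)

(* The unfolding of the list type used by rule (1). *)
let list_body = Sum [ ("nil", Void); ("cons", Pair (Int, ListT)) ]

type value =
  | UnitV                           (* dummy payload carried by nil     *)
  | IntV  of int
  | PairV of value * value          (* pair(h,t)                        *)
  | TagV  of string * value         (* tag[t](v)                        *)
  | PtrV  of value                  (* ptr(v), a pointer to a heap node *)

(* A checker for rules (1)-(3): does value [v] have type [t]? *)
let rec has_ty (v : value) (t : ty) : bool =
  match v, t with
  | UnitV, Void -> true
  | IntV _, Int -> true
  | PairV (l, r), Pair (tl, tr) -> has_ty l tl && has_ty r tr      (* rule 3 *)
  | TagV (tag, w), Sum branches ->                                 (* rule 2 *)
      (match List.assoc_opt tag branches with
       | Some tw -> has_ty w tw
       | None -> false)
  | PtrV w, ListT -> has_ty w list_body                            (* rule 1 *)
  | _, _ -> false

(* car, expressed as the three primitive steps described in the text:
   dereference the pointer, check the tag, project the first component. *)
let car (l : value) : value =
  match l with
  | PtrV (TagV ("cons", PairV (h, _))) -> h
  | PtrV (TagV ("nil", _)) -> failwith "car of the empty list"
  | _ -> failwith "ill-typed argument (ruled out by the type system)"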
A type-directed compiler [26] is one that performs transformations on typed
intermediate languages, making use of type information to guide the trans-
lation, and ensuring that typing is preserved by each transformation stage.
In a type-directed compiler each compilation phase translates not only the
program code, but also its type, in such a way that the translated program
has the translated type. How far this can be pushed is the subject of ongoing
research. In the TILT compiler we are able to propagate type information
down to the RTL (register transfer language level), at which point type propagation
is abandoned. The recent development of Typed Assembly Language
(TAL) [7,2] demonstrates the feasibility of propagating type information down
to x86-like assembly code. The integration of TILT and TAL is the subject of
ongoing research.
What does the propagation of type information have to do with safety? A
well-behaved type system is one for which we can prove a soundness theorem
relating the execution behavior of a program to its type. One consequence of
the soundness theorem for the type system is that it is impossible for well-typed
programs to incur type errors, memory errors, or control errors. That is,
well-typed programs are safe. Of course not every safe program is well-typed
- typing is a sufficient condition for safety, but not a necessary one. However,
we may readily check type correctness of a program using lightweight and
well-understood methods. The technique of type-directed compilation demonstrates
that a rich variety of programs can be certified using typed intermediate
languages. Whether there are demands that cannot be met using this method
remains to be seen.
3.3 Logical Assertions and Explicit Proofs
Another approach to code certification that we are exploring [30,11] is the
use of a combination of logical assertions and explicit proofs. A certifying
compiler such as Touchstone works by augmenting intermediate code with
logical assertions tracking the types and ranges of values. Checking the validity
of these assertions is a two-step process:
(1) Verification condition generation (vcgen). The program is "symbolically
evaluated" to propagate the implications of the logical assertions through
each of the instructions in the program. This results in a set of logical
implications that must hold for the program to be considered properly
annotated.
(2) Theorem proving. Each of the implications generated during vcgen are
verified using a combination of automatic theorem proving techniques,
including constraint satisfaction procedures (such as simplex) and proof
search techniques for first-order logic.
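The following OCaml sketch (ours, over a deliberately tiny annotated instruction form; none of the names come from Touchstone) shows the shape of such a symbolic evaluation: assumptions accumulate as tests are passed, and every required precondition becomes an implication, a verification condition to be discharged by the theorem prover.

(* Formulas over symbolic program values, standing in for the real
   assertion language. *)
type formula =
  | Ge  of string * int              (* x >= n *)
  | Lt  of string * string           (* x < y  *)
  | Imp of formula list * formula    (* hypotheses => conclusion *)

(* A toy annotated program: Assume records a test the program has already
   passed (e.g. a conditional branch taken); Assert records a precondition
   that must hold (e.g. before an array access). *)
type instr =
  | Assume of formula
  | Assert of formula

(* Symbolic evaluation: walk the instructions, accumulating assumptions
   and turning every Assert into a verification condition. *)
let vcgen (prog : instr list) : formula list =
  let rec go assumed acc = function
    | [] -> List.rev acc
    | Assume f :: rest -> go (f :: assumed) acc rest
    | Assert f :: rest -> go assumed (Imp (List.rev assumed, f) :: acc) rest
  in
  go [] [] prog

(* For an annotated bounds check like the sub(A,i) example below, the
   obligations come out as implications whose hypotheses are the tests
   already passed. *)
let vcs =
  vcgen [ Assume (Ge ("i", 0)); Assume (Lt ("i", "len_A"));
          Assert (Ge ("i", 0)); Assert (Lt ("i", "len_A")) ]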
In this form the trusted computing base must include both the vcgen procedure
and the theorem prover(s) used to check the verification conditions. In
addition, a specification of a safety policy which describes the conditions for
safe execution as well as pre- and post-conditions on all procedures supplied
by the host operating system and required of the certified code.
To reduce the size of the trusted computing base, we may regard the combination
of vcgen and theorem proving as a kind of "post-processing" phase
in which the validity of the annotated program is not only checked, but a
formal representation of the proof for the validity of the verification conditions
is attached to the code. This is achieved by using certifying theorem
provers [11,12] that not only seek to prove theorems, but also provide an explicit
representation of the proof whenever one is found. Once the proofs have
been obtained, it is much simpler to check them than it is to find them. Indeed,
only the proof checker need be integrated into the trusted computing base;
the theorem provers need not be trusted nor be protected from tampering.
To gain an understanding of what is involved here, consider the array subscript
operation in a safe language. Given an array A of length n and an integer i, the
operation sub(A,i) checks whether or not 0 ≤ i < n and, if so, retrieves the
ith element of A. At a high level this is an atomic operation, but when compiled
into intermediate code it is defined in terms of more primitive operations along
the following lines:
if (0 <= i && i < *A) {
    return A[i+1];    /* unsafe access */
} else {
    ... signal an error ...
}
Note that *A refers to the length of array A. Here we assume that an integer
array is represented by a pointer to a sequence of words, the first of which
contains the array's length, and the rest of which are its contents.
Annotating this code with logical assertions, we obtain the following:
/* int i, array A */
if (0 <= i && i < *A) {
    return A[i+1];
} else {
    ... signal an error ...
}
The assertion that A is an array corresponds to the invariants mentioned
above; in practice, a much lower-level type system is employed [11]. It is a
simple matter to check that the given assertions are correct in this case.
Observe that the role of the conditional test is to enable the theorem prover
to verify that the index operation A[i+1] is memory-safe - it does not stray
beyond the bounds of the array. In many cases the run-time test is redundant
because the compiler is able to prove that the run-time test must come out
true, and therefore can be eliminated. For example, if the high-level code were
a simple loop such as the following, we can expect the individual bounds checks
to be elided:
int i, sum = 0;
for (i = 0; i < length(A); i++) {
    sum += sub(A,i);
}
At the call site for sub the compiler is able to prove that 0 ≤ i < n, where
n is the length of A. Propagating this through the code for sub, we find that
the conditional test can be eliminated because the compiler can prove that
the test must always be true. This leads to the following code (after further
optimization):

int i, sum = 0;
for (i = 0; i < *A; i++) {
    sum += A[i+1];
}
Given this annotation, we can now perform verification condition generation
and theorem proving to check that the required precondition on the unsafe
array subscript operation is indeed true, which ensures that the program is
safe to execute. However, rather than place this additional burden on the
programmer, we can instead attach a formal representation π of the proof of
this fact to the assertions:
int i, sum = 0;
for (i = 0; i < *A; i++) {
    sum += A[i+1];    /* proof: π */
}
The proof term π is a checkable witness to the validity of the given assertions
that can be checked by the code recipient. In practice this witness is a term
of the LF λ-calculus for which proof checking is simply another form of type
checking (see Section 2.3).
4 Experimental Results
As mentioned earlier, we have implemented several systems to test and demonstrate
the ideas of certified code, typed intermediate languages, certifying com-
pilers, and certifying theorem provers. The results of our experiments with
these systems confirm several important claims about the general framework
for safety certification of code that we have presented in this paper.
(1) Approaches to certified code such as PCC and TAL allow highly optimized
code to be verified for safety. This means that few if any compromises
need to be made between high performance and safety.
(2) The various approaches to certifying compilers that we have explored,
such as typed intermediate languages and logical assertions, can be scaled
Speedup vs. unoptimized GNU gcc:

             blur  sharpen  qsort  simplex  kmp   unpack  bcopy  edge   GMEAN
GNU gcc -O4  2.33  3.82     3.51   2.97     2.44  2.62    5.50   2.92   3.17
DEC cc -O4   2.92  3.68     3.52   2.79     2.44  2.76    6.88   11.52  3.92
Cert Comp    2.64  3.89     3.52   3.86     1.93  2.20    4.00   9.16   3.48

Fig. 1. Comparison of generated object-code performance between the Touchstone,
GCC, and DEC CC optimizing compilers. The figures show the speedup of the
object code relative to unoptimized code as produced by gcc.
up to languages of realistic scale and complexity. Furthermore, they provide
an automatic means of obtaining code that is certified to hold standard
safety properties such as type safety, memory safety, and control
safety.
(3) The need to include annotations and/or proofs with the code is not an
undue burden. Furthermore, checking these certificates can be performed
quickly and reliably.
In order to support these claims and give a better feel for the practical details
in our systems, we now present some results of our experiments.
4.1 The Touchstone Certifying Compiler
Touchstone is a certifying compiler for an imperative programming language
with a C-like syntax. Although the source programs look very much like C
programs, the language compiled by Touchstone is made "safe" by having a
strong static type system, eliminating pointer arithmetic, and ensuring that
all variables are initialized. Although this language makes restrictions on C,
it is still a rich and powerful language in the sense of allowing recursive pro-
cedures, aliased variables, switch statements, and dynamically allocated data
structures. Indeed, it is straightforward to translate many practical C source
programs into the language compiled by Touchstone [31].
Given such a source program, Touchstone generates a highly optimized native
code target program for the DEC Alpha architecture with an attached proof
of its type, memory, and control safety.
Figure
1 shows the results of a collection of benchmark programs when compiled
with Touchstone, the Gnu gcc C compiler, and the DEC cc C compiler.
The benchmark programs were obtained from standard Unix utility applica-
Compilation time breakdown (ms):

                  blur   sharpen  qsort  simplex  kmp    unpack  bcopy  edge
Code Generation   271.0  818.0    560.0  4340.0   348.0  1885.0  136.0  697.0
VC Generation     6.3    20.9     11.5   73.9     8.4    70.6    3.5    9.7
Proving           81.0   257.0    127.0  1272.0   108.0  1912.0  25.0   143.0
Proof Checking    7.7    23.5     16.3   120.3    9.6    92.5    3.6    14.7

Proof and code sizes (bytes):

        blur  sharpen  qsort  simplex  kmp   unpack  bcopy  edge
Proof   1372  4532     2748   20308    1874  17260   620    2218
Code    320   1248     576    3792     496   2528    128    640

Fig. 2. Breakdown of the time required to generate the proof-carrying code binaries.
tions (such as the xv and gzip programs) and then edited in a completely
straightforward way to replace uses of pointer arithmetic with array-indexing
syntax. (Recall that the C-like language compiled by Touchstone does not
support pointer arithmetic.) The bars in the figure were generated by first
compiling each program with the Gnu gcc compiler with all optimizations
turned off. Then, Touchstone, Gnu gcc, and DEC cc were used to compile the
programs with all optimizations turned on. The bars in the figure show the
relative speed improvements produced by each optimizing compiler relative to
the unoptimized code.
The figure shows that the Touchstone compiler generates object code which
is comparable in speed to that produced by the gcc and cc compilers, and
in fact is superior to gcc overall. This result is particularly surprising when
one considers that Touchstone is obligated to guarantee that all array accesses
and pointer dereferences are safe (that is, Touchstone must sometimes perform
array-bounds and null-pointer checks), whereas the gcc and cc compilers do
not do this. In fact, Touchstone is able to optimize away almost all array-bounds
and null-pointer checks, and generates proofs that can convince any
code recipient that all array and pointer accesses are still safe.
In
Figure
2 we provide a breakdown of the time required to compile each
benchmark program into a PCC binary. Each bar in the figure is divided into
four parts. The bottom-most part shows the "conventional" compile time. This
is the time required to generate the DEC Alpha assembly code plus invariant
annotations required by the underlying PCC system. Because Touchstone is a
highly aggressive optimizing compiler, it is a bit slower than typical compilers.
However, on average it is comparable in compiling times to the DEC cc compiler
with all optimizations enabled. The second part shows the time required
to generate the verification conditions. Finally, the third and fourth parts show
the times required for proof generation and proof checking, respectively.
One can see that very little time is required for the verification-condition
generation and proof checking. This is important because it is these two steps
that must also be performed by any recipient of the generated code. The fact
that these two parts are so small is an indication that the code recipient in
fact has very little work to do.
Early measurements with the Touchstone compiler showed that the proofs
were about 2 to 4 times larger than the code size [31]. Since the time that
those experimental results were obtained, we have made considerable progress
on reducing the size of the proofs, without increasing the time or effort required
to check them. These reductions lead to proof sizes on the order of 10% to
40% of the size of the code. In addition, we have been experimenting with a
new representation (which we refer to as the "oracle string" representation)
which, for the types of programs described here, further reduces proof sizes
to be consistently less than 5% of the code size, at the cost of making proof
checking about 50% slower. We hope to be able to describe these techniques
and show their effects in a future report.
4.2 The Cedilla Systems Special J Compiler
The experimental results shown above are admittedly less than convincing,
due to the relatively small size of the test programs. Recently, however, we
have "spun off" a commercial enterprise to build an industrial-strength implementation
of a proof-carrying code system. This enterprise, called Cedilla
Systems Incorporated, is essentially an experiment in technology transfer, in
the sense that it is attempting to take ideas and results directly out of the
laboratory and into commercial practice. Cedilla Systems has shown that the
ideas presented in this paper can be scaled up to full-scale languages. This
is shown most clearly in an optimizing native-code compiler for the full Java
programming language, called Special J [32], which successfully compiles over
300 "real-world" Java applications, including rather large ones such as Sun's
StarOffice application suite and their HotJava web browser.
The operation of Special J is similar to Touchstone, in that Special J produces
optimized target code annotated with invariants that make it possible
to construct a proof of safety. A verification-condition generator is then used
to extract a verification condition, and a certifying theorem prover generates
the proof which is attached to the target code.
To see a simple example of this process, consider the following Java program:
public class Bcopy1 {
    public static void bcopy(int[] src, int[] dst) {
        int l = src.length;
        int i;
        for (i = 0; i < l; i++) {
            dst[i] = src[i];
        }
    }
}
This source program is compiled by Special J into the target program for
the Intel x86 architecture shown in Figure 3. Included in this target program
are numerous data structures to support Java's object model and run-time
system. The core of this output, however, is the native code for the bcopy
method shown above.
This code is largely conventional except for the insertion of several invariants,
each of which is marked with a special "ANN_" macro. These annotations are
"hints" from the compiler that help the automatic proof generator do its job.
They do not generate code, and they do not constrain the object code in any
way. However, they serve an important engineering purpose, as we will now
describe.
The ANN_LOCALS annotation simply says that the compiled method uses three
locals. In this case, the register allocator did not need any spill space on the
stack, so the only locals are the two formal parameters and the return address.
This hint is useful for proving memory safety. The prover could, in principle,
analyze the code itself to reverse-engineer this information; but it is much
easier for the compiler to communicate what it already knows. Since one of
our engineering goals is to simplify as much as possible the size of the trusted
computing base, it is better to have the compiler generate this information,
leaving only the checking problem to the PCC infrastructure.
The ANN_UNREACHABLE annotations come from the fact that the safety policy
specifies that array accesses must always be in bounds and null pointers must
never be dereferenced. In Java, such failures result in run-time exceptions, but
ANN_LOCALS(_bcopy_6arrays6Bcopy1AIAI, 3)
.globl _bcopy_6arrays6Bcopy1AIAI
_bcopy_6arrays6Bcopy1AIAI:
cmpl $0, 4(%esp) ;src==null?
je L6
movl 4(%esp), %ebx
movl 4(%ebx), %ecx
testl %ecx, %ecx ;l==0?
jg L22
ret
L22:
xorl %edx, %edx ;initialize i
cmpl $0, 8(%esp) ;dst==null?
je L6
movl 8(%esp), %eax
movl 4(%eax), %esi ;dst.length
L7:
ANN_INV(ANN_DOM_LOOP,
        LF_ (/\ (csubneq eax 0)
            (/\ (csubneq ebx 0)
            (/\ (csubb edx ecx)
                (of rm mem)))) _LF,
cmpl %esi, %edx ;i<dst.length?
jae L13 ;bounds check on dst
movl 8(%ebx, %edx, 4), %edi ;src[i]
movl %edi, 8(%eax, %edx, 4) ;dst[i] := src[i]
incl %edx ;i++
cmpl %ecx, %edx ;i<l?
jl L7
ret
ANN_INV(ANN_DOM_LOOP,
        LF_ true _LF,
ret
L13:
call Jv_ThrowBadArrayIndex
ANN_UNREACHABLE
nop
L6:
call Jv_ThrowNullPointer
ANN_UNREACHABLE
nop
Fig. 3. Special J output code
the safety policy in our example requires a proof that these exceptions will
never be thrown. Therefore, the compiler points out places that must never be
reached during execution so that the proof generator does not need to reverse-engineer
where the source-code array accesses and pointer dereferences ended
up in the binary.
The first ANN_INV annotation is by far the most interesting of all the annota-
tions. Note that the Special J compiler has optimized the tight loop:
• Both required null checks are hoisted. (Note that the null check on dst
cannot be hoisted before the loop entry because the loop may never be
entered at all; but it can be hoisted to the first iteration.)
• The bounds check on src is hoisted. (Note that hoisting the bounds check
on dst would be a more exotic optimization, because in the case that dst
is not long enough, the loop must copy as far as it can and then throw an
exception.)
The proof generator must still prove memory safety, so it must prove that
inside the loop there are no null-pointer dereferences or out-of-bounds memory
accesses. Essentially, the proof generator needs to go through the same
reasoning that the compiler went through when it hoisted those checks outside
the loop. Therefore, to help the proof generator, the compiler outputs the
relevant loop invariants that it discovered while performing the code-hoisting
optimizations. In this case, it discovered that:
• src (in register ebx) is not null: (csubneq ebx 0).
• dst (in register eax) is not null: (csubneq eax 0).
The "csub" prefix denotes the result of a Pentium comparison. Other things
in the loop invariant specify:
• which registers are modified in the loop: RB(.),
• that memory safety is a loop invariant: (of rm mem).
Pseudo-register rm denotes the computer's memory, and (of rm mem) means
that no unsafe operations have been performed on the memory.
After this target code is generated by Special J, the Cedilla Systems proof
generator reads it and outputs a proof that the code satisfies the safety policy.
The first step to doing this is to generate a logical predicate, called a verification
condition (or simply VC), whose logical validity implies the safety of
the code. It is important that the same VC be used by both the producer and
the recipient of the code, so that the recipient can guarantee that the "right"
safety proof is provided, as opposed to a proof of some unrelated or irrelevant
property.
As we explained earlier, both the proofs and the verification conditions are
expressed in a language called the Logical Framework (LF). Space prevents us
from including the entire VC for our bcopy example. However, the following
excerpt illustrates the main points. (Note that X0 is the dst parameter, X1 is
the src parameter, X2 is a pseudo-register representing the current state of
the heap, and X3 is the variable i.)
(=> (csubb X3 (sel4 X2 (add X1 4)))
(=> (csubneq X0 0)
(=> (csubneq X1 0)
(=> (csubb X3 (sel4 X2 (add X0 4)))
    (/\ (saferd4 (add X1 (add (imul X3 4) 8)))
    (/\ (safewr4 (add X0 (add (imul X3 4) 8))
                 (sel4 X2 (add X1 (add (imul X3 4) 8))))
    (/\
(=> (csublt (add X3 1) (sel4 X2 (add X1 4)))
    (/\ (csubneq X1 0)
    (/\ (csubneq X0 0)
    (/\ (csubb (add X3 1) (sel4 X2 (add X1 4)))
This excerpt of the VC says that, given the loop-invariant assumptions
• (csubb X3 (sel4 X2 (add X1 4))) (i.e., src[i] is in bounds),
• (csubneq X0 0) (i.e., dst is non-null), and
• (csubneq X1 0) (i.e., src is non-null),
and given the bounds check that was emitted for dst:
(csubb X3 (sel4 X2 (add X0 4)))
as well as some additional assumptions outside the loop (not shown in this
snippet), proofs are required to establish the safety of the read of the src array
and the write to the dst array. Furthermore, given the additional loop-entry
condition
(csublt (add X3 1) (sel4 X2 (add X1 4)))
proofs are required to reestablish the loop invariants.
Here, X0 corresponds to eax (dst in the source), X1 to ebx (src in the source),
X2 to rm (the memory pseudo-register), and X3 to edx (i in the source). Note
that src.length is (sel4 X2 (add X1 4)), because the length is stored at
byte-offset 4 in an array object. The safety policy, and hence the VC, specifies
and enforces these requirements on data-structure layout.
The proof generator reads the VC and outputs a proof of it. A tiny excerpt of
this proof is shown below:
([ASS10: pf (csubb X3 (sel4 X2 (add X1 4)))]
 ([ASS12: pf (csubneq X1 0)]
  ([ASS13: pf (csubb X3 (sel4 X2 (add X0 4)))]
   (andi szint (below1 ASS10)))
The proofs are shown here in a concrete syntax for LF developed for the
Elf system [22,21]. In this very small snippet of the proof, one can see that
assumptions (marked with the "ASS." identifiers) are labeled and then used
in the body of the proof. Logical inference rules such as "impi", which in this
case stands for the "implication-introduction" rule, are specified declaratively
in the LF language, and included with the PCC system as part of the definition
of the safety policy.
Finally, a binary encoding of the proof is made and attached to the target code.
The proof is included in the data segment of a standard binary in the COFF
format. In this case, the proof takes up 7.1% of the total object file. We note
that we currently use an unoptimized binary encoding of the proof in which
all proof tokens are 16 bits long. Huffman encoding produces an average token
size of 3.5 bits, and so a Huffman-encoded binary is expected to be about 22%
of the size of a non-Huffman-encoded binary. In this case, that would make
the size of the proof approximately 45 bytes, or less than 2% of the object
file. While Huffman encoding would indeed be an effective means of reducing
the size of proofs, we have found that other representations such as "oracle
strings" do an even better job, without incurring the cost of decompression. In
the case of the current example, the oracle string representation of the proof
requires less than 6 bytes. We hope to report in detail on this representation
in a future report.
5 Conclusion and Future Work
We have presented a general framework for the safety certification of code. It
relies on the formal definition of a safety policy and explicit evidence for compliance
attached to mobile code. This evidence may take the form of formal
safety proofs (in proof-carrying code) or type annotations (in typed assembly
language). In both cases one can establish with mathematical rigor that
certified code is tamper-proof and can be executed safely without additional
run-time checks or operating system protection boundaries. Experience with
the approaches has shown the overhead to be acceptable in practice, both
in the time to validate the certificate and the space to represent it, using
advanced techniques from logical frameworks and type theory.
We also sketched how certificates can be obtained automatically through the
use of certifying compilers and theorem provers. The approach of typed intermediate
languages propagates safety properties which are guaranteed for
the high-level source language throughout the compilation process down to
low-level code. Safety remains verifiable at each layer through type-checking.
A certifying compiler such as Touchstone uses logical assertions throughout
compilation in a similar manner, except that the validity of the logical assertions
must be assured by theorem proving. This is practical for the class of
safety policies considered here, since the compiler can provide the information
necessary to guarantee that a proof can always be found. Finally, a certifying
theorem prover does not need to be part of the trusted computing base since
it produces explicit proof terms which can be checked independently by an
implementation of a logical framework.
The key technology underlying our approaches to safety is type theory as
used in modern programming language design and implementation. The idea
that type systems guarantee program safety and modularity for high-level
languages is an old one. We see our main contribution in demonstrating in
practical, working systems such as Touchstone, the TILT compiler, and the
Twelf logical framework, that techniques from type theory can equally be
applied to intermediate and low-level languages down to machine code in order
to support provably safe mobile code for which certificates can be generated
automatically.
--R
Efficient software-based fault isolation
ACM Symposium on Principles of Programming Languages
A type system for expressive security policies
Safe kernel extensions without run-time checking
From System F to typed assembly language
A framework for defining logics
On equivalence and canonical forms in the LF type theory
Generating proofs from a decision procedure
An empirical study of the runtime behavior of higher-order logic programs
Efficient representation and validation of logical proofs
Algorithms for equality and unification in the presence of notational definitions
Definitional interpreters for higher-order programming languages
A semantic model of types and machine instructions for proof-carrying code
System description: Twelf - a meta-logical framework for deductive systems
Elf: A meta-language for deductive systems
Logic programming in the LF logical framework
A realistic typed assembly language
TIL: A type-directed optimizing compiler for ML
Eliminating array bound checking through dependent types
Dependent types in practical programming
A dependently typed assembly language
The design and implementation of a certifying compiler
A certifying compiler for Java
--TR
Logic programming in the LF logical framework
A framework for defining logics
Efficient software-based fault isolation
Extensibility safety and performance in the SPIN operating system
Safe kernel extensions without run-time checking
Proof-carrying code
From system F to typed assembly language
Eliminating array bound checking through dependent types
The design and implementation of a certifying compiler
Dependent types in practical programming
Proof-carrying authentication
Resource bound certification
A semantic model of types and machine instructions for proof-carrying code
A type system for expressive security policies
A certifying compiler for Java
Algorithms for Equality and Unification in the Presence of Notational Definitions
Stack-Based Typed Assembly Language
Elf
System Description
Logical frameworks
Efficient Representation and Validation of Proofs
Definitional interpreters for higher-order programming languages
A Dependently Typed Assembly Language
Compiling with proofs
Higher-order rewriting with dependent types (lambda calculus)
--CTR
André Pang , Don Stewart , Sean Seefried , Manuel M. T. Chakravarty, Plugging Haskell in, Proceedings of the 2004 ACM SIGPLAN workshop on Haskell, September 22-22, 2004, Snowbird, Utah, USA
Mike Jochen , Anteneh Addis Anteneh , Lori L. Pollock , Lisa M. Marvel, Enabling control over adaptive program transformation for dynamically evolving mobile software validation, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005 | certifying compilers;type systems;typed assembly language;type safety;proof-carrying code |
635014 | Abstracting soft constraints. | Soft constraints are very flexible and expressive. However, they are also very complex to handle. For this reason, it may be reasonable in several cases to pass to an abstract version of a given soft constraint problem, and then to bring some useful information from the abstract problem to the concrete one. This will hopefully make the search for a solution, or for an optimal solution, of the concrete problem, faster.In this paper we propose an abstraction scheme for soft constraint problems and we study its main properties. We show that processing the abstracted version of a soft constraint problem can help us in finding good approximations of the optimal solutions, or also in obtaining information that can make the subsequent search for the best solution easier.We also show how the abstraction scheme can be used to devise new hybrid algorithms for solving soft constraint problems, and also to import constraint propagation algorithms from the abstract scenario to the concrete one. This may be useful when we don't have any (or any efficient) propagation algorithm in the concrete setting. | Introduction
Classical constraint satisfaction problems (CSPs) [18] are a very convenient and
expressive formalism for many real-life problems, like scheduling, resource al-
location, vehicle routing, timetabling, and many others [23]. However, many of
these problems are often more faithfully represented as soft constraint satisfaction
problems (SCSPs), which are just like classical CSPs except that each
assignment of values to variables in the constraints is associated to an element
taken from a partially ordered set. These elements can then be interpreted as
levels of preference, or costs, or levels of certainty, or many other criteria.
There are many formalizations of soft constraint problems. In this paper
we consider the one based on semirings [4, 3], where the semiring specifies the
partially ordered set and the appropriate operation to use to combine constraints
together.
Although it is obvious that SCSPs are much more expressive than classical
CSPs, they are also more difficult to process and to solve. Therefore, sometimes it
may be too costly to find all, or even only one, optimal solution. Also, although
classical propagation techniques like arc-consistency [17] can be extended to
SCSPs [4], even such techniques can be too costly to be used, depending on the
size and structure of the partial order associated to the SCSP.
For these reasons, it may be reasonable to work on a simplified version of the
given problem, trying however not to lose too much information. We propose to
define this simplified version by means of the notion of abstraction, which takes
an SCSP and returns a new one which is simpler to solve. Here, as in many
other works on abstraction [16, 15], "simpler" may mean many things, like the
fact that a certain solution algorithm finds a solution, or an optimal solution, in
a fewer number of steps, or also that the abstracted problem can be processed
by machinery which is not available in the concrete context.
There are many formal proposals to describe the process of abstracting a
notion, be it a formula, or a problem [16], or even a classical [11] or a soft CSP
[21]. Among these, we chose to use one based on Galois insertions [5], mainly
to refer to a well-known theory, with many results and properties that can be
useful for our purposes. This made our approach compatible with the general
theory of abstraction in [16]. Then, we adapted it to work on soft constraints:
given an SCSP (the concrete one), we get an abstract SCSP by just changing
the associated semiring, and relating the two structures (the concrete and the
abstract one) via a Galois insertion. Note that this way of abstracting constraint
problems does not change the structure of the problem (the set of variables
remains the same, as well as the set of constraints), but just the semiring values
to be associated to the tuples of values for the variables in each constraint.
Once we get the abstracted version of a given problem, we propose to
1. process the abstracted version: this may mean either solving it completely,
or also applying some incomplete solver which may derive some useful information
on the abstract problem;
2. bring back to the original problem some (or possibly all) of the information
derived in the abstract context;
3. continue the solution process on the transformed problem, which is a concrete
problem equivalent to the given one.
All this process has the main aim of finding an optimal solution, or an approximation
of it, for the original SCSP, within the resource bounds we have. The
hope is that, by following the above three steps, we get to the final goal faster
than just solving the original problem.
A deep study of the relationship between the concrete SCSP and the corresponding
abstract one allows us to prove some results which can help in deriving
useful information on the abstract problem and then take some of the derived
information back to the concrete problem. In particular, we can prove the following:
- If the abstraction satisfies a certain property, all optimal solutions of the
concrete SCSP are also optimal in the corresponding abstract SCSP (see
Theorem 4). Thus, in order to find an optimal solution of the concrete problem,
we could find all the optimal solutions of the abstract problem, and
then just check their optimality on the concrete SCSP.
- Given any optimal solution of the abstract problem, we can find upper and
lower bounds for an optimal solution for the concrete problem (see Theorem
5). If we are satisfied with these bounds, we could just take the optimal
solution of the abstract problem as a reasonable approximation of an optimal
solution for the concrete problem.
- If we apply some constraint propagation technique over the abstract problem,
say P, obtaining a new abstract problem, say P', some of the information in P'
can be inserted into P, obtaining a new concrete problem which is closer
to its solution and thus easier to solve (see Theorems 6 and 8).
This however can be done only if the semiring operation which describes how
to combine constraints on the concrete side is idempotent (see Theorem 6).
- If instead this operation is not idempotent, still we can bring back some information
from the abstract side. In particular, we can bring back the inconsistencies
(that is, tuples with the worst element of the semiring associated to them),
since we are sure that these same tuples are inconsistent also in the concrete
SCSP (see Theorem 8).
In both the last two cases, the new concrete problem is easier to solve, in the
sense, for example, that a branch-and-bound algorithm would explore a smaller
(or equal) search tree before finding an optimal solution.
The paper is organized as follows. First, in Section 2 we give the necessary
notions about soft CSPs and abstraction. Then, in Section 3 we define our notion
of abstraction, and in Section 4 we prove properties of our abstraction scheme
and discuss some of their consequences. Finally, in Section 7 we summarize our
work and give hints about future directions.
2 Background
In this section we recall the main notions about soft constraints [4] and abstract
interpretation [5], that will be useful for the developments and results of this
paper.
2.1 Soft constraints
In the literature there are many formalizations of the concept of soft constraints
[21, 8, 13, 10]. Here we refer to the one described in [4, 3], which however can be
shown to generalize and express many of the others [4, 2]. In a few words, a soft
constraint is just a classical constraint where each instantiation of its variables
has an associated value from a partially ordered set. Combining constraints will
then have to take into account such additional values, and thus the formalism
has also to provide suitable operations for combination (×) and comparison (+)
of tuples of values and constraints. This is why this formalization is based on
the concept of semiring, which is just a set plus two operations.
Definition 1 (semirings and c-semirings). A semiring is a tuple ⟨A, +, ×, 0, 1⟩
such that:
- A is a set and 0, 1 ∈ A;
- + is commutative, associative, and 0 is its unit element;
- × is associative, distributes over +, 1 is its unit element and 0 is its absorbing
element.
A c-semiring is a semiring ⟨A, +, ×, 0, 1⟩ such that + is idempotent with 1 as
its absorbing element and × is commutative.
Let us consider the relation ≤S over A such that a ≤S b iff a + b = b. Then
it is possible to prove that (see [4]):
- ≤S is a partial order;
- + and × are monotone on ≤S;
- 0 is its minimum and 1 its maximum;
- ⟨A, ≤S⟩ is a complete lattice and, for all a, b ∈ A, a + b = lub(a, b).
Moreover, if × is idempotent, then ⟨A, ≤S⟩ is a complete distributive lattice and
× is its glb. Informally, the relation ≤S gives us a way to compare (some of the)
tuples of values and constraints. In fact, when we have a ≤S b, we will say that
b is better than a.
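As a concrete illustration of Definition 1 and of the induced order (this sketch is ours, not part of the paper), a c-semiring can be coded in Python as a record of its two operations and two constants; the order a ≤S b iff a + b = b then comes for free from the additive operation:
from dataclasses import dataclass
from typing import Callable, Any

@dataclass(frozen=True)
class Semiring:
    """A c-semiring <A, +, x, 0, 1> given by its operations and constants."""
    plus: Callable[[Any, Any], Any]   # additive operation (idempotent in a c-semiring)
    times: Callable[[Any, Any], Any]  # multiplicative operation
    zero: Any                         # worst element (unit of +)
    one: Any                          # best element (unit of x)

    def leq(self, a, b):
        """Induced order: a <=_S b  iff  a + b = b."""
        return self.plus(a, b) == b

# The fuzzy c-semiring <[0,1], max, min, 0, 1>.
fuzzy = Semiring(plus=max, times=min, zero=0.0, one=1.0)

assert fuzzy.leq(0.2, 0.7)          # 0.2 is worse than 0.7
assert fuzzy.leq(fuzzy.zero, 0.5)   # 0 is the minimum
assert fuzzy.leq(0.5, fuzzy.one)    # 1 is the maximum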
Definition 2 (constraints). Given a c-semiring S = ⟨A, +, ×, 0, 1⟩, a finite
set D (the domain of the variables), and an ordered set of variables V, a constraint
is a pair ⟨def, con⟩ where con ⊆ V and def : D^|con| → A.
Therefore, a constraint specifies a set of variables (the ones in con), and assigns
to each tuple of values of D of these variables an element of the semiring set
A. This element can then be interpreted in several ways: as a level of preference,
or as a cost, or as a probability, etc. The correct way to interpret such elements
depends on the choice of the semiring operations.
Constraints can be compared by looking at the semiring values associated to
the same tuples: consider two constraints c1 = ⟨def1, con⟩ and c2 = ⟨def2, con⟩,
with |con| = k. Then c1 ⊑S c2 if, for all k-tuples t, def1(t) ≤S def2(t). The relation
⊑S is a partial order. In the following we will also use the obvious extension
of this relation to sets of constraints, and also to problems (seen as sets of constraints).
Therefore, given two SCSPs P1 and P2 with the same graph topology,
we will write P1 ⊑S P2 if, for each constraint c1 in P1 and the corresponding
constraint c2 in P2, we have that c1 ⊑S c2.
Definition 3 (soft constraint problem). A soft constraint satisfaction problem
(SCSP) is a pair ⟨C, con⟩ where con ⊆ V and C is a set of constraints.
Note that a classical CSP is an SCSP where the chosen c-semiring is
S_CSP = ⟨{false, true}, ∨, ∧, false, true⟩.
Fuzzy CSPs [8, 19, 20] can instead be modeled in the SCSP framework by
choosing the c-semiring S_FCSP = ⟨[0, 1], max, min, 0, 1⟩.
Figure 1 shows a fuzzy CSP. Variables are inside circles, constraints are represented
by undirected arcs, and semiring values are written to the right of the
corresponding tuples. Here we assume that the domain D of the variables contains
only elements a and b.
[The figure shows two variables, x and y, each with a unary constraint, and a binary
constraint between them. One unary constraint assigns 0.9 to a and 0.5 to b, the
other assigns 0.9 to a and 0.1 to b; the binary constraint assigns 0.8 to aa, 0.2 to ab,
and 0 to ba and bb.]
Fig. 1. A fuzzy CSP.
Definition 4 (combination). Given two constraints c1 = ⟨def1, con1⟩ and
c2 = ⟨def2, con2⟩, their combination c1 ⊗ c2 is the constraint ⟨def, con⟩ defined
by con = con1 ∪ con2 and def(t) = def1(t↓^con_con1) × def2(t↓^con_con2). The combination
operator ⊗ can be straightforwardly extended also to sets of constraints: when
applied to a set of constraints C, we will write ⊗C.
In words, combining two constraints means building a new constraint involving
all the variables of the original ones, and which associates to each tuple of
domain values for such variables a semiring element which is obtained by multiplying
the elements associated by the original constraints to the appropriate
subtuples.
Using the properties of × and +, it is easy to prove that:
- ⊗ is associative, commutative, and monotone over ⊑S;
- if × is idempotent, ⊗ is idempotent as well.
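The following small Python sketch (our own encoding of constraints as (scope, table) pairs, not notation from the paper) spells out the mechanics of Definition 4 for the fuzzy semiring, where the multiplicative operation is min; the variable names and values are made up for illustration:
from itertools import product

D = ['a', 'b']                      # domain of every variable
times = min                         # multiplicative operation of the fuzzy semiring

def project(t, con, sub):
    """t restricted from the variables in con to the variables in sub."""
    return tuple(t[con.index(v)] for v in sub)

def combine(c1, c2):
    """c1 (x) c2 as in Definition 4, instantiated on the fuzzy semiring."""
    (con1, def1), (con2, def2) = c1, c2
    con = tuple(dict.fromkeys(con1 + con2))          # union of the two scopes
    defn = {t: times(def1[project(t, con, con1)], def2[project(t, con, con2)])
            for t in product(D, repeat=len(con))}
    return con, defn

c1 = (('x', 'y'), {t: 0.7 if t[0] == t[1] else 0.3 for t in product(D, repeat=2)})
c2 = (('y', 'z'), {t: 0.9 if t[0] != t[1] else 0.4 for t in product(D, repeat=2)})
con, defn = combine(c1, c2)
print(con)                           # ('x', 'y', 'z')
print(defn[('a', 'a', 'b')])         # min(0.7, 0.9) = 0.7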
Definition 5 (projection). Given a constraint c = ⟨def, con⟩ and a subset I
of V, the projection of c over I, written c ⇓_I, is the constraint ⟨def', con'⟩ where
con' = con ∩ I and def'(t') = Σ_{t : t↓^con_(I∩con) = t'} def(t).
Informally, projecting means eliminating some variables. This is done by
associating to each tuple over the remaining variables a semiring element which is
the sum of the elements associated by the original constraint to all the extensions
of this tuple over the eliminated variables.
Definition 6 (solution). The solution of an SCSP problem P = ⟨C, con⟩ is the
constraint Sol(P) = (⊗C) ⇓_con.
That is, to obtain the solution of an SCSP, we combine all constraints, and
then project over the variables in con. In this way we get the constraint over con
which is "induced" by the entire SCSP.
For example, each solution of the fuzzy CSP of Figure 1 consists of a pair
of domain values (that is, a domain value for each of the two variables) and
an associated semiring element (here we assume that con contains all variables).
(By t↓^X_Y we mean the projection of tuple t, which is defined over the set of variables
X, over the set of variables Y ⊆ X.)
Such an element is obtained by looking at the smallest value for all the subtuples
(as many as the constraints) forming the pair. For example, for tuple ⟨a, a⟩,
we have to compute the minimum between 0.9 (the value of a in the first unary
constraint), 0.8 (the value of ⟨a, a⟩ in the binary constraint), and 0.9 (the value of a
in the second unary constraint). Hence, the resulting value for this tuple is 0.8.
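The computation just described can be checked with a few lines of Python; the constraint tables are read off Figure 1 (which unary constraint belongs to which variable is our reading of the figure, and is immaterial for tuple ⟨a, a⟩):
from itertools import product

cx  = {'a': 0.9, 'b': 0.5}                      # unary constraint on x (assumed)
cy  = {'a': 0.9, 'b': 0.1}                      # unary constraint on y (assumed)
cxy = {('a', 'a'): 0.8, ('a', 'b'): 0.2,
       ('b', 'a'): 0.0, ('b', 'b'): 0.0}        # binary constraint on <x, y>

# Solution of the fuzzy CSP: combine (min) all constraints for each assignment.
sol = {(x, y): min(cx[x], cy[y], cxy[(x, y)]) for x, y in product('ab', repeat=2)}
print(sol[('a', 'a')])                            # 0.8, as in the text
best = max(sol.values())
print([t for t, v in sol.items() if v == best])   # optimal solutions (Definition 7)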
Definition 7 (optimal solution). Given an SCSP problem P, consider Sol(P) =
⟨def, con⟩. An optimal solution of P is a pair ⟨t, v⟩ such that def(t) = v and
there is no t' such that v <S def(t').
Therefore optimal solutions are solutions which have the best semiring element
among those associated to solutions. The set of optimal solutions of an
SCSP P will be written as Opt(P).
Definition 8 (problem ordering and equivalence). Consider two problems
P1 and P2. Then P1 ⊑P P2 if Sol(P1) ⊑S Sol(P2). If P1 ⊑P P2 and P2 ⊑P P1,
then they have the same solution, thus we say that they are equivalent and we
write P1 ≡ P2. The relation ⊑P is a preorder. Moreover, ≡
is an equivalence relation.
SCSP problems can be solved by extending and adapting the techniques usually
used for classical CSPs. For example, to find the best solution we could
employ a branch-and-bound search algorithm (instead of the classical backtracking),
and also the successfully used propagation techniques, like arc-consistency
[17], can be generalized to be used for SCSPs.
The detailed formal definition of propagation algorithms (sometimes called
also local consistency algorithms) for SCSPs can be found in [4]. For the purpose
of this paper, what is important to say is that the generalization from classical
CSPs concerns the fact that, instead of deleting values or tuples, obtaining local
consistency in SCSPs means changing the semiring values associated to some
tuples or domain elements. The change always brings these values towards the
worst value of the semiring, that is, the 0. Thus, it is obvious that, given an SCSP
problem P and the problem P' obtained by applying some local consistency
algorithm to P, we must have P' ⊑S P. Another important property of such
algorithms is that Sol(P') = Sol(P): local consistency algorithms do not
change the set of solutions.
2.2 Abstraction
Abstract interpretation [1, 5, 6] is a theory developed to reason about the relation
between two different semantics (the concrete and the abstract semantics). The
idea of approximating program properties by evaluating a program on a simpler
domain of descriptions of \concrete" program states goes back to the early 70's.
The inspiration was that of approximating properties from the exact (concrete)
semantics into an approximate (abstract) semantics, that explicitly exhibits a
structure (e.g., ordering) which is somehow present in the richer concrete structure
associated to program execution.
The guiding idea is to relate the concrete and the abstract interpretation of
the calculus by a pair of functions, the abstraction function α and the concretization
function γ, which form a Galois connection.
Let (C, ⊑) (concrete domain) be the domain of the concrete semantics, while
(A, ≤) (abstract domain) be the domain of the abstract semantics. The partial
order relations reflect an approximation relation. Since in approximation theory
a partial order specifies the precision degree of any element in a poset, it is
obvious to assume that, if α is a mapping associating an abstract object with each
concrete element in (C, ⊑), then the following holds: if α(x) ≤ y,
then y is also a correct, although less precise, abstract approximation of x. The
same argument holds if x ⊑ γ(y). Then y is also a correct approximation of x,
although x provides more accurate information than γ(y). This gives rise to the
following formal definition.
Definition 9 (Galois insertion). Let (C, ⊑) and (A, ≤) be two posets (the
concrete and the abstract domain). A Galois connection ⟨α, γ⟩ : (C, ⊑) ⇌ (A, ≤)
is a pair of maps α : C → A and γ : A → C such that
1. α and γ are monotonic,
2. for each x ∈ C, x ⊑ γ(α(x)), and
3. for each y ∈ A, α(γ(y)) ≤ y.
Moreover, a Galois insertion (of A in C) ⟨α, γ⟩ : (C, ⊑) ⇌ (A, ≤) is a Galois
connection where α ∘ γ is the identity on A.
Property 2 is called extensivity of γ ∘ α. The map α (γ) is called the lower
(upper) adjoint or abstraction (concretization) in the context of abstract interpretation.
The following basic properties are satisfied by any Galois insertion:
1. γ is injective and α is surjective.
2. γ ∘ α is an upper closure operator in (C, ⊑).
3. α is additive and γ is co-additive.
4. Upper and lower adjoints uniquely determine each other. Namely,
α(x) = glb{y ∈ A | x ⊑ γ(y)} and γ(y) = lub{x ∈ C | α(x) ≤ y}.
5. α is an isomorphism from (γ ∘ α)(C) to A, having γ as its inverse.
An example of a Galois insertion can be seen in Figure 2. Here, the concrete
lattice is ⟨[0, 1], ≤⟩, and the abstract one is ⟨{0, 1}, ≤⟩. Function α maps all real
numbers in [0, 0.5] into 0, and all other reals (in (0.5, 1]) into 1. Function γ
maps 0 into 0.5 and 1 into 1.
One property that will be useful later relates to a precise relationship between
the ordering in the concrete lattice and that in the abstract one.
Fig. 2. A Galois insertion.
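The α and γ of Figure 2 can be written down directly; the following Python sketch (ours) spot-checks the Galois-insertion properties on a few sample points:
# Galois insertion between the concrete lattice ([0,1], <=) and the
# abstract lattice ({0,1}, <=), as in Figure 2 (threshold 0.5).

def alpha(x):            # abstraction: [0, 0.5] -> 0, (0.5, 1] -> 1
    return 0 if x <= 0.5 else 1

def gamma(y):            # concretization: 0 -> 0.5, 1 -> 1
    return 0.5 if y == 0 else 1.0

pts = [0.0, 0.3, 0.5, 0.7, 1.0]
# Extensivity: x <= gamma(alpha(x)) for every concrete x.
assert all(x <= gamma(alpha(x)) for x in pts)
# alpha o gamma is the identity on the abstract domain (Galois *insertion*).
assert all(alpha(gamma(y)) == y for y in [0, 1])
# Both maps are monotonic on the sampled points.
assert all(alpha(a) <= alpha(b) for a in pts for b in pts if a <= b)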
Theorem 1 (total ordering). Consider a Galois insertion from (C, ⊑) to
(A, ≤). Then, if ⊑ is a total order, so is ≤.
Proof. It easily follows from the monotonicity of α (that is, x ⊑ y implies α(x) ≤
α(y)) and from its surjectivity (that is, there is no element in A which is not the
image of some element in C via α). □
Usually, both the concrete and the abstract lattice have some operators that
are used to define the corresponding semantics. Most of the time it is useful,
and required, that the abstract operators show a certain relationship with the
corresponding concrete ones. This relationship is called local correctness.
Definition 10 (local correctness). Let f : C → C be an operator over the
concrete lattice, and assume that ~f is its abstract counterpart. Then ~f is locally
correct w.r.t. f if, for all x ∈ C, f(x) ⊑ γ(~f(α(x))).
3 Abstracting soft CSPs
Given the notions of soft constraints and abstraction, recalled in the previous
sections, we now want to show how to abstract soft constraint problems. The
main idea is very simple: we just want to pass, via the abstraction, from an SCSP
P over a certain semiring S to another SCSP ~P over the semiring ~S, where the
lattices associated to ~S and S are related by a Galois insertion as shown above.
Definition 11 (abstracting SCSPs). Consider the concrete SCSP problem
P = ⟨C, con⟩ over the semiring S, where
- C = {c0, ..., cn} with ci = ⟨coni, defi⟩, and
- S = ⟨A, +, ×, 0, 1⟩;
we define the abstract SCSP problem ~P = ⟨~C, con⟩ over the semiring ~S, where
- ~S = ⟨~A, ~+, ~×, ~0, ~1⟩;
- ~C = {~c0, ..., ~cn} with ~ci = ⟨coni, ~defi⟩, where ~defi = α ∘ defi;
- if L is the lattice associated to S and ~L the lattice associated
to ~S, then there is a Galois insertion ⟨α, γ⟩ such that α : L → ~L;
- ~× is locally correct with respect to ×.
Notice that the kind of abstraction we consider in this paper does not change
the structure of the SCSP problem. That is, C and ~C have the same number
of constraints, and ci and ~ci involve the same variables. The only thing that is
changed by abstracting an SCSP is the semiring. Thus P and ~P have the same
graph topology (variables and constraints), but different constraint definitions,
since if a certain tuple of domain values in a constraint of P has semiring value a,
then the same tuple in the same constraint of ~P has semiring value α(a). Notice
also that α and γ can be defined in several different ways, but all of them have
to satisfy the properties of the Galois insertion, from which it derives, among
others, that α(0) = ~0 and α(1) = ~1.
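Operationally, abstracting a problem only rewrites the semiring value attached to every tuple; a minimal Python sketch (ours, reusing the constraint encoding of the earlier sketches and the fuzzy-to-classical α of Figure 2) is:
def alpha(x):                       # fuzzy -> classical, threshold 0.5
    return 0 if x <= 0.5 else 1

def abstract_problem(constraints):
    """Apply alpha to every tuple value; scopes and topology stay unchanged."""
    return [(con, {t: alpha(v) for t, v in defn.items()})
            for con, defn in constraints]

P = [(('x',), {('a',): 0.9, ('b',): 0.5}),
     (('x', 'y'), {('a', 'a'): 0.8, ('a', 'b'): 0.2,
                   ('b', 'a'): 0.0, ('b', 'b'): 0.0})]
for con, defn in abstract_problem(P):
    print(con, defn)
# (('x',), {('a',): 1, ('b',): 0}) ...  -- a classical CSP with the same structure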
Example 1. As an example, consider any SCSP over the semiring for optimization,
essentially ⟨R⁻ ∪ {-∞}, max, +, -∞, 0⟩
(where costs are represented by negative reals), and suppose we want to abstract
it onto the semiring for fuzzy reasoning, ⟨[0, 1], max, min, 0, 1⟩.
In other words, instead of computing the maximum of the sum of all costs, we
just want to compute the maximum of the minimum of all costs, and we want to
normalize the costs over [0, 1]. Notice that the abstract problem is in the FCSP
class and it has an idempotent × operator (which is the min). This means that
in the abstract framework we can perform local consistency over the problem
in order to find inconsistencies. As noted above, the mapping α
can be defined in different ways. For example, one can decide
to map all the reals below some fixed real x onto 0 and then to map the reals
in [x, 0] into the reals in [0, 1] by using a normalization function, for example
a linear one sending x to 0 and 0 to 1 (such as r ↦ 1 - r/x).
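One member of this family of abstraction functions can be written out explicitly; the cut-off x = -10 and the particular linear normalization below are our illustrative choices, not prescribed by the example:
# Abstract the optimization semiring over negative reals (maximize the sum of
# costs) onto the fuzzy semiring (maximize the minimum), as in Example 1.

x = -10.0                      # chosen cut-off: costs below x are mapped to 0

def alpha(cost):
    """Map a cost in (-inf, 0] to a preference in [0, 1]."""
    if cost <= x:
        return 0.0
    return 1.0 - cost / x      # linear: x -> 0, 0 -> 1

print(alpha(0.0), alpha(-5.0), alpha(-10.0), alpha(-25.0))   # 1.0 0.5 0.0 0.0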
Example 2. Another example is the abstraction from the fuzzy semiring to the
classical one, that is, from ⟨[0, 1], max, min, 0, 1⟩ to ⟨{0, 1}, ∨, ∧, 0, 1⟩.
Here function α maps each element of [0, 1] into either 0 or 1. For example, one
could map all the elements in [0, x] onto 0, and all those in (x, 1] onto 1, for some
fixed x. Figure 2 represents this example with x = 0.5.
We have defined Galois insertions between two lattices ⟨L, ≤S⟩ and ⟨~L, ≤~S⟩
of semiring values. However, for convenience, in the following we will often use
Galois insertions between lattices of problems ⟨PL, ⊑S⟩ and ⟨~PL, ⊑~S⟩, where PL
contains problems over the concrete semiring and ~PL problems over the abstract semiring.
This does not change the meaning of our abstraction: we are just upgrading the
Galois insertion from semiring values to problems. Thus, when we say that
~P = α(P), we mean that ~P is obtained from P via the application of α to all
the semiring values appearing in P.
An important property of our notion of abstraction is that the composition
of two abstractions is still an abstraction. This allows us to build a complex abstraction
by defining several simpler abstractions to be composed.
Theorem 2 (abstraction composition). Consider an abstraction from the
lattice corresponding to a semiring S1 to that corresponding to a semiring S2,
denoted by the pair ⟨α1, γ1⟩. Consider now another abstraction from the lattice
corresponding to the semiring S2 to that corresponding to a semiring S3, denoted
by the pair ⟨α2, γ2⟩. Then the pair ⟨α, γ⟩, with α = α2 ∘ α1 and γ = γ1 ∘ γ2,
is an abstraction as well.
Proof. We first have to prove that ⟨α, γ⟩ satisfies the four
properties of a Galois insertion:
- since the composition of monotone functions is again a monotone function,
we have that both α and γ are monotone functions;
- given a value x from the first abstraction, we have that x ⊑1 γ1(α1(x)).
Moreover, for any element y we have that y ⊑2 γ2(α2(y)). This holds
also for y = α1(x); thus by monotonicity of
γ1 we have x ⊑1 γ1(γ2(α2(α1(x))));
- a similar proof can be used for the third property;
- the composition of two identities is still an identity.
To prove that ×3 is locally correct w.r.t. ×1, it is enough to consider the
local correctness of ×2 w.r.t. ×1 and of ×3 w.r.t. ×2, and the monotonicity of γ1. □
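Composition is also immediate to see at the level of the abstraction functions themselves; a tiny sketch (composing the illustrative cost-to-fuzzy map used above with the fuzzy-to-classical map of Figure 2, both our own choices) is:
def alpha1(cost, x=-10.0):                 # costs -> fuzzy (Example 1 style)
    return 0.0 if cost <= x else 1.0 - cost / x

def alpha2(p):                             # fuzzy -> classical (Figure 2)
    return 0 if p <= 0.5 else 1

def alpha3(cost):                          # the composed abstraction
    return alpha2(alpha1(cost))

print([alpha3(c) for c in (0.0, -3.0, -5.0, -8.0, -20.0)])   # [1, 1, 0, 0, 0]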
4 Properties and advantages of the abstraction
In this section we will define and prove several properties that hold on abstractions
of soft constraint problems. The main goal here is to point out some of the
advantages that one can have in passing through the abstracted version of a soft
constraint problem instead of solving directly the concrete version.
4.1 Relating a soft constraint problem and its abstract version
Let us consider the scheme depicted in Figure 3. Here and in the following
pictures, the left box contains the lattice of concrete problems, and the right
one the lattice of abstract problems. The partial order in each of these lattices is
shown via dashed lines. Connections between the two lattices, via the abstraction
and concretization functions, are shown via directed arrows. In the following, we
will call S = ⟨A, +, ×, 0, 1⟩ the concrete semiring and ~S = ⟨~A, ~+, ~×, ~0, ~1⟩ the
abstract one. Thus we will always consider a Galois insertion ⟨α, γ⟩ from ⟨A, ≤S⟩
to ⟨~A, ≤~S⟩.
Fig. 3. The concrete and the abstract problem.
In Figure 3, P is the starting SCSP problem. Then with the mapping α we
get ~P = α(P), which is an abstraction of P. By applying the mapping γ
to ~P, we get the problem γ(α(P)). Let
us first notice that these two problems (P and
γ(α(P))) are related by a precise property, as stated by the following theorem.
Theorem 3. Given an SCSP problem P over S, we have that P ⊑S γ(α(P)).
Proof. It immediately follows from the properties of a Galois insertion, in particular
from the fact that x ≤S γ(α(x)) for any x in the concrete lattice. In fact,
P ⊑S γ(α(P)) means that, for each tuple in each constraint of P, the semiring
value associated to such a tuple in P is smaller (w.r.t. ≤S) than the corresponding
value associated to the same tuple in γ(α(P)). □
Notice that this implies that, if a tuple in γ(α(P)) has semiring value 0, then
it must have value 0 also in P. This holds also for the solutions, whose semiring
value is obtained by combining the semiring values of several tuples.
Corollary 1. Given an SCSP problem P over S, we have that
Sol(P) ⊑S Sol(γ(α(P))).
Proof. We recall that Sol(P) is just a constraint, which is obtained as (⊗C) ⇓_con.
Thus the statement of this corollary comes from the monotonicity of × and +. □
Therefore, by passing from P to γ(α(P)), no new inconsistencies are introduced:
if a solution of γ(α(P)) has value 0, then this was true also in P. However,
it is possible that some inconsistencies are forgotten (that is, they appear to be
consistent after the abstraction process).
If the abstraction preserves the semiring ordering (that is, applying the abstraction
function and then combining gives elements which are in the same
ordering as the elements obtained by combining only), then there is also an interesting
relationship between the set of optimal solutions of P and that of α(P).
In fact, if a certain tuple is optimal in P, then this same tuple is also optimal in
α(P). Let us first investigate the meaning of the order-preserving property.
Definition 12 (order-preserving abstraction). Consider two sets I1 and I2
of concrete elements. Then an abstraction is said to be order-preserving if
∏~_{x ∈ I1} α(x) ≤~S ∏~_{x ∈ I2} α(x)  implies  ∏_{x ∈ I1} x ≤S ∏_{x ∈ I2} x,
where the products ∏ and ∏~ refer to the multiplicative operations of the concrete and
the abstract semirings, respectively.
In words, this notion of order-preservation means that if we first abstract and
then combine, or we combine only, we get the same ordering (but on different
semirings) among the resulting elements.
Example 3. An abstraction which is not order-preserving can be seen in Figure 4.
Here, the concrete and the abstract sets, as well as the additive operations of the
two semirings, can be seen from the picture. For the multiplicative operations,
we assume they coincide with the glb of the two semirings.
In this case, the concrete ordering is partial, while the abstract ordering is
total. Functions α and γ are depicted in the figure by arrows going from the
concrete semiring to the abstract one and vice versa. Assume that the concrete
problem has no solution with value 1. Then all solutions with value a or b are
optimal. Suppose a solution with value b is obtained by computing b = 1 × b, while
we have a = 1 × a. Then the abstract counterparts of these two combinations turn
out to be strictly ordered in the abstract semiring, with the counterpart of a below
that of b. Therefore the solution with value a,
which is optimal in the concrete problem, is not optimal anymore in the abstract
problem.
Fig. 4. An abstraction which is not order-preserving.
Example 4. The abstraction in Figure 2 is order-preserving. In fact, consider two
abstract combinations which are ordered, that is, 0 ≤ 1. The value 1 can only be
obtained as α(x) ∧ α(y),
where both x and y must be greater than 0.5. Thus their concrete combination
(which is the min), say v, is always greater than 0.5. On the other hand, 0 can
be obtained by combining either two 0's (therefore the images of two elements
smaller than or equal to 0.5, whose minimum is smaller than 0.5 and thus smaller
than v), or by combining a 0 and a 1, which are images of a value greater than
0.5 and one smaller than 0.5. Also in this case, their combination (the min) is
smaller than 0.5 and thus smaller than v. Thus the order-preserving property
holds.
Example 5. Another abstraction which is not order-preserving is the one that
passes from the semiring ⟨N ∪ {+∞}, min, sum, +∞, 0⟩, where we minimize the
sum of values over the naturals, to the semiring ⟨N ∪ {+∞}, min, max, +∞, 0⟩,
where we minimize the maximum of values over the naturals. In words, this
abstraction maintains the same domain of the semiring, and the same additive
operation (min), but it changes the multiplicative operation (which passes from
sum to max). Notice that the semiring orderings are the opposite of those usually
used over the naturals: if i is smaller than j then j ≤S i, thus the best element
is 0 and the worst is +∞. The abstraction function α is just the identity, and so
is the concretization function γ.
In this case, consider in the abstract semiring two values and the way they are
obtained by combining other two values of the abstract semiring: for example,
8 = max(7, 8) and 9 = max(1, 9). In the abstract ordering, 8 is higher than 9.
Then, let us see how the images of the combined values (the same values, since
γ is the identity) relate to each other: we have sum(7, 8) = 15 and sum(1, 9) = 10,
and 15 is lower than 10 in the concrete ordering. Thus the order-preserving
property does not hold.
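This counterexample can be checked mechanically; remember that in both semirings smaller numbers are better, so the semiring order is the reverse of the usual numeric order (the sketch is ours):
# Example 5: abstracting <N, min, sum, +inf, 0> onto <N, min, max, +inf, 0>.
# alpha is the identity; only the combination operator changes (sum -> max).
# In both semirings a value is *better* when it is numerically smaller.

pair1, pair2 = (7, 8), (1, 9)

abstract1, abstract2 = max(pair1), max(pair2)     # 8 and 9
concrete1, concrete2 = sum(pair1), sum(pair2)     # 15 and 10

print(abstract1 < abstract2)   # True: pair1 is better in the abstract semiring
print(concrete1 < concrete2)   # False: pair1 is worse in the concrete semiring
# The two orderings disagree, so this abstraction is not order-preserving.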
Notice that, if we reduce the sets I1 and I2 to singletons, say x and y, then
the order-preserving property says that α(x) ≤~S α(y) implies that x ≤S y. This
means that if two abstract objects are ordered, then their concrete counterparts
have to be ordered as well, and in the same way. Of course they could never be
ordered in the opposite sense, otherwise α would not be monotonic; but they
could be incomparable. Therefore, if we choose an abstraction where incomparable
objects are mapped by α onto ordered objects, then we don't have the
order-preserving property. A consequence of this is that if the abstract semiring
is totally ordered, and we want an order-preserving abstraction, then the
concrete semiring must be totally ordered as well.
On the other hand, if two abstract objects are not ordered, then the corresponding
concrete objects can be ordered in any sense, or they can also be not
comparable. Notice that this restriction of the order-preserving property to singleton
sets always holds when the concrete ordering is total. In fact, in this case,
if two abstract elements are ordered in a certain way, then it is impossible that
the corresponding concrete elements are ordered in the opposite way, because,
as we said above, of the monotonicity of the α function.
Theorem 4. Consider an abstraction which is order-preserving. Given an SCSP
problem P over S, we have that Opt(P) ⊆ Opt(α(P)).
Proof. Let us take a tuple t which is optimal in the concrete semiring S, with
value v. Then v has been obtained by multiplying the values of some subtuples.
Suppose, without loss of generality, that the number of such subtuples is two
(that is, we have two subtuples with values v1 and v2, and v = v1 × v2). Let us then take the value
of this tuple in the abstract problem, that is, the abstract combination of the
abstractions of v1 and v2: v' = α(v1) ~× α(v2). We have to show that if v
is optimal, then also v' is optimal.
Suppose then that v' is not optimal, that is, there exists another tuple t''
with value v'' such that v' <~S v''. Assume v'' = v''1 ~× v''2. Now let us see the
value of tuple t'' in P. If we set v''i = α(wi), where w1 and w2 are the values in P
of the subtuples of t'', we have that this value is w = w1 × w2. Let
us now compare w with v. Since v' <~S v'', by the order-preserving property
we get that v <S w. But this means that v is not optimal, which was our initial
assumption. Therefore v' has to be optimal. □
Therefore, in case of order-preservation, the set of optimal solutions of the
abstract problem contains all the optimal solutions of the concrete problem. In
other words, it is not allowed that an optimal solution in the concrete domain becomes
non-optimal in the abstract domain. However, some non-optimal solutions
could become optimal by becoming incomparable with the optimal solutions.
Thus, if we want to find an optimal solution of the concrete problem, we could
find all the optimal solutions of the abstract problem, and then use them on the
concrete side to find an optimal solution for the concrete problem. Assuming
that working on the abstract side is easier than on the concrete side, this method
could help us find an optimal solution of the concrete problem by looking at just
a subset of tuples in the concrete problem.
Another important property, which holds for any abstraction, concerns computing
bounds that approximate an optimal solution of a concrete problem. In
fact, any optimal solution, say t, of the abstract problem, say with value ~v, can
be used to obtain both an upper and a lower bound of an optimum in P. In
fact, we can prove that there is an optimal solution in P with value between
γ(~v) and the value of t in P. Thus, if we think that approximating the optimal
value with a value within these two bounds is satisfactory, we can take t as an
approximation of an optimal solution of P.
Theorem 5. Given an SCSP problem P over S, consider an optimal solution
of α(P), say t, with semiring value ~v in α(P) and v in P. Then there exists an
optimal solution t* of P, say with value v*, such that v ≤S v* ≤S γ(~v).
Proof. By local correctness of the multiplicative operation of the abstract semiring,
we have that v ≤S γ(~v). Since v is the value of t in P, either t itself is
optimal in P, or there is another tuple t* which has value better than v, say
v*. We will now show that v* cannot be greater than γ(~v).
In fact, assume by absurd that v* >S γ(~v). By local correctness of the
multiplicative operation of the abstract semiring, we have that α(v*) is smaller
than the value of t* in α(P). Also, by monotonicity of α, from v* >S γ(~v) we get
that ~v ≤~S α(v*). Therefore by transitivity we obtain that ~v is smaller than the
value of t* in α(P), which is not possible because we assumed that ~v was optimal.
Therefore there must be an optimal value between v and γ(~v). □
Thus, given a tuple t with optimal value ~v in the abstract problem, instead
of spending time to compute an exact optimum of P, we can do the following:
- compute γ(~v), thus obtaining an upper bound of an optimum of P;
- compute the value of t in P, which is a lower bound of the same optimum
of P;
- if we think that such bounds are close enough, we can take t as a reasonable
approximation (to be precise, a lower bound) of an optimum of P.
Notice that this theorem does not need the order-preserving property in the
abstraction, thus any abstraction can exploit its result.
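As a small helper, the recipe above can be phrased as follows (a sketch under our own encoding: value_in_P computes the value of a tuple in the concrete problem, gamma is the concretization function, and the numbers in the usage example are made up):
def bounds_from_abstract_optimum(t, value_in_P, gamma, abstract_value):
    # Theorem 5: for a tuple t optimal in alpha(P) with abstract value abstract_value,
    # some optimum of P lies between the value of t in P and gamma(abstract_value).
    lower = value_in_P(t)            # lower bound: the value of t in P
    upper = gamma(abstract_value)    # upper bound: gamma of the abstract optimum
    return lower, upper

# Made-up usage with the fuzzy-to-classical abstraction of Figure 2:
gamma = lambda y: 0.5 if y == 0 else 1.0
value_in_P = lambda t: {('a', 'a'): 0.8, ('a', 'b'): 0.2}.get(t, 0.0)
print(bounds_from_abstract_optimum(('a', 'a'), value_in_P, gamma, 1))   # (0.8, 1.0)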
4.2 Working on the abstract problem
Consider now what we can do on the abstract problem, α(P). One possibility is
to apply an abstract function ~f, which can be, for example, a local consistency
algorithm or also a solution algorithm. In the following, we will consider functions
~f which are always intensive, that is, which bring the given problem closer to the
bottom of the lattice. In fact, our goal is to solve an SCSP, thus going higher in
the lattice does not help in this task, since solving means combining constraints
and thus getting lower in the lattice. Also, functions ~f will always be locally
correct with respect to any function f_sol which solves the concrete problem. We
will call such a property solution-correctness. More precisely, given a problem
P with constraint set C, f_sol(P) is a new problem P' with the same topology
as P whose tuples have semiring values possibly lower. Let us call C' the set of
constraints of P'. Then, for any constraint c' = ⟨def', con'⟩ ∈ C', we have
c' = (⊗C) ⇓_con'. In other
words, f_sol combines all constraints of P and then projects the resulting global
constraint over each of the original constraints.
Definition 13. Given an SCSP problem P over S, consider a function ~f on
α(P). Then ~f is solution-correct if, given any f_sol which solves P, ~f is locally
correct w.r.t. f_sol.
We will also need the notion of safeness of a function, which just means that
it maintains all the solutions.
Definition 14. Given an SCSP problem P and a function f : PL → PL, f is
safe if Sol(P) = Sol(f(P)).
It is easy to see that any local consistency algorithm, as defined in [4], can
be seen as a safe, intensive, and solution-correct function.
From ~f(α(P)), applying the concretization function γ, we get γ(~f(α(P))),
which therefore is again over the concrete semiring (the same as P). The following
property says that, under certain conditions, P and P ⊗ γ(~f(α(P))) are
equivalent. Figure 5 describes such a situation. In this figure, we can see that
several partial order lines have been drawn:
- on the abstract side, function ~f takes any element closer to the bottom,
because of its intensiveness;
- on the concrete side, we have that:
  - P ⊗ γ(~f(α(P))) is smaller than both P and γ(~f(α(P))), because of the
properties of ⊗;
  - γ(~f(α(P))) is smaller than γ(α(P)),
because of the monotonicity of γ;
  - γ(~f(α(P))) is higher than f_sol(P), because of the solution-correctness of
~f;
  - f_sol(P) is smaller than P because of the way f_sol(P) is constructed;
  - if × is idempotent, then it coincides with the glb, thus we have that
P ⊗ γ(~f(α(P))) is higher than f_sol(P), because by definition the glb is
the highest among all the lower bounds of P and γ(~f(α(P))).
Fig. 5. The general abstraction scheme, with × idempotent.
Theorem 6. Given an SCSP problem P over S, consider a function ~f on α(P)
which is safe, solution-correct, and intensive. Then, if × is idempotent,
Sol(P) = Sol(P ⊗ γ(~f(α(P)))).
Proof. Take any tuple t with value v in P, which is obtained by combining
the values of some subtuples, say two: v = v1 × v2. Let us now consider the
abstract versions of v1 and v2: α(v1) and α(v2). ~f changes these values
by lowering them, thus in ~f(α(P)) the two subtuples have values v'1 ≤~S α(v1)
and v'2 ≤~S α(v2).
Since ~f is safe, we have that v'1 ~× v'2 = α(v1) ~× α(v2).
~f is solution-correct, thus v ≤S γ(v'1 ~× v'2). By monotonicity of γ, we have that
γ(v'1 ~× v'2) ≤S γ(v'1) and γ(v'1 ~× v'2) ≤S γ(v'2). This implies that
γ(v'1 ~× v'2) ≤S γ(v'1) × γ(v'2), since ×
is idempotent by assumption and thus it coincides with the glb. Thus we have
that v ≤S γ(v'1) × γ(v'2).
To prove that P and P ⊗ γ(~f(α(P))) give the same value to each tuple, we
now have to prove that v1 × v2 × γ(v'1) × γ(v'2) = v1 × v2. By commutativity of ×,
we can write this as (v1 × v2) × (γ(v'1) × γ(v'2)),
and we have shown that v = v1 × v2 ≤S γ(v'1) × γ(v'2). Since × coincides with
the glb, (v1 × v2) × (γ(v'1) × γ(v'2)) = v1 × v2 = v. □
This theorem does not say anything about the power of ~f, which could make
many modifications to α(P), or it could also not modify anything. In this last
case, ~f(α(P)) = α(P) (see Figure 6), so γ(~f(α(P))) = γ(α(P)), which
means that we have not gained anything in abstracting P. However, we can
always use the relationship between P and α(P) (see Theorems 4 and 5) to find
an approximation of the optimal solutions and of the inconsistencies of P.
Fig. 6. The scheme when ~f does not modify anything.
If instead ~f modifies all semiring elements in α(P), then, if the order of the
concrete semiring is total, we have that P ⊒S γ(~f(α(P))) (see Figure
7), and thus we can work on γ(~f(α(P))) to find the solutions of P. In
fact, γ(~f(α(P))) is lower than P and thus closer to the solution.
Theorem 7. Given an SCSP problem P over S, consider a function ~f on α(P)
which is safe, solution-correct, and intensive. Then, if × is idempotent, ~f modifies
every semiring element in α(P), and the order of the concrete semiring is
total, we have that P ⊒S γ(~f(α(P))).
Proof. Consider any tuple t in any constraint of P, and let us call v its semiring
value in P and v_sol its value in f_sol(P). Obviously, we have that v_sol ≤S v.
Now take v' = γ(~f(α(v))). By monotonicity of γ, we cannot have v <S v'. Also,
by solution-correctness of ~f, we cannot have v' <S v_sol. Thus we must have
v_sol ≤S v' ≤S v, which proves the statement of the theorem. □
Fig. 7. The scheme when the concrete semiring has a total order.
Notice that we need the idempotence of the × operator for Theorems 6 and
7. If instead × is not idempotent, then we can prove something weaker. Figure
8 shows this situation. With respect to Figure 5, we can see that the possible
non-idempotence of × changes the partial order relationship on the concrete
side. In particular, we don't have the problem P ⊗ γ(~f(α(P))) any more, nor the
problem f_sol(P), since these problems would not have the same solutions as P
and thus are not interesting to us. We have instead a new problem P', which is
constructed in such a way as to "insert" the inconsistencies of ~f(α(P)) into P.
Such a problem is obviously lower than P in the concrete partial order, since it is the same
as P with the exception of some more 0's, but the most important point is that
it has the same solutions as P.
Fig. 8. The scheme when × is not idempotent.
Theorem 8. Given an SCSP problem P over S, consider a function ~f on α(P)
which is safe, solution-correct and intensive. Then, if × is not idempotent, consider
P' to be the SCSP which is the same as P except for those tuples which
have semiring value 0 in γ(~f(α(P))): these tuples are given value 0 also in P'.
Then we have that Sol(P) = Sol(P').
Proof. Take any tuple t with value v in P, which is obtained by combining
the values of some subtuples, say two: v = v1 × v2. Let us now consider the
abstract versions of v1 and v2: α(v1) and α(v2). ~f changes these values
by lowering them, thus in ~f(α(P)) the two subtuples have values v'1 ≤~S α(v1)
and v'2 ≤~S α(v2).
Since ~f is safe, we have that v'1 ~× v'2 = α(v1) ~× α(v2). ~f is
solution-correct, thus v ≤S γ(v'1 ~× v'2). By monotonicity of γ, we have that
γ(v'1 ~× v'2) ≤S γ(v'1) and γ(v'1 ~× v'2) ≤S γ(v'2). Thus we have that v ≤S γ(v'1)
and v ≤S γ(v'2).
Now suppose that γ(v'1) = 0 (that is, the first subtuple has value 0 in
γ(~f(α(P)))). This implies that also v = 0. Therefore, if we
set v1 = 0, again the combination of v1 and v2 will result in v, which is 0. □
Summarizing, the above theorems can give us several hints on how to use the
abstraction scheme to make the solution of P easier: if × is idempotent, then
we can replace P with P ⊗ γ(~f(α(P))), and get the same solutions (by Theorem
6). If instead × is not idempotent, we can replace P with P' (by Theorem 8). In
any case, the point in passing from P to P ⊗ γ(~f(α(P))), or to P', is that the new
problem should be easier to solve than P, since the semiring values of its tuples
are more explicit, that is, closer to the values of these tuples in a completely
solved problem.
More precisely, consider a branch-and-bound algorithm to find an optimal
solution of P. Then, once a solution is found, its value will be used to cut away
some branches, where the semiring value is worse than the value of the solution
already found. Now, if the values of the tuples are worse in the new problem
than in P, each branch will have a worse value and thus we might cut away
more branches. For example, consider the fuzzy semiring (that is, we want to
maximize the minimum of the values of the subtuples): if the solution already
found has value 0.6, then each partial solution of P with value smaller than
or equal to 0.6 can be discarded (together with all its corresponding subtree in
the search tree), but all partial solutions with value greater than 0.6 must be
considered; if instead we work in the new problem, the same partial solution with
value greater than 0.6 may now have a smaller value, possibly also smaller than
0.6, and thus can be disregarded. Therefore, the search tree of the new problem
is smaller than that of P.
Another point to notice is that, if using a greedy algorithm to find the initial
solution (to use later as a lower bound), this initial phase in the new problem
will lead to a better estimate, since the values of the tuples are worse in the new
problem and thus closer to the optimum. In the extreme case in which the change
from P to the new problem brings the semiring values of the tuples to coincide
with the value of their combination, it is possible to see that the initial solution
is already the optimal one.
Notice also that, if × is not idempotent, a tuple of P' has either the same
value as in P, or 0. Thus the initial estimate in P' is the same as that of P
(since the initial solution must be a solution), but the search tree of P' is again
smaller than that of P, since there may be partial solutions which in P have
value different from 0 and in P' have value 0, and thus the global inconsistency
may be recognized earlier.
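To make the pruning argument concrete, here is a generic branch-and-bound skeleton for fuzzy problems (our sketch, not the paper's algorithm); its only purpose is to show that lowering tuple values in the transformed problem makes the bound test fire earlier and thus cuts more branches:
def branch_and_bound(variables, domain, value_of_partial):
    # Maximize the fuzzy (min-combined) value of a complete assignment.
    # value_of_partial(asg) must return the combined value of all constraints
    # that are fully instantiated by the partial assignment asg.
    best_val, best_asg, visited = -1.0, None, 0

    def search(assignment):
        nonlocal best_val, best_asg, visited
        visited += 1
        val = value_of_partial(assignment)
        if val <= best_val:          # bound: this branch cannot beat the incumbent
            return
        if len(assignment) == len(variables):
            best_val, best_asg = val, dict(assignment)
            return
        var = variables[len(assignment)]
        for d in domain:
            search({**assignment, var: d})

    search({})
    return best_asg, best_val, visited

# Made-up two-variable fuzzy problem; lower tuple values mean earlier pruning.
cx  = {'a': 0.9, 'b': 0.5}
cxy = {('a', 'a'): 0.8, ('a', 'b'): 0.2, ('b', 'a'): 0.0, ('b', 'b'): 0.0}

def value_of_partial(asg):
    vals = [1.0]
    if 'x' in asg:
        vals.append(cx[asg['x']])
    if 'x' in asg and 'y' in asg:
        vals.append(cxy[(asg['x'], asg['y'])])
    return min(vals)

print(branch_and_bound(['x', 'y'], ['a', 'b'], value_of_partial))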
The same reasoning used for Theorem 4 on α(P) can also be applied to
~f(α(P)). In fact, since ~f is safe, the solutions of ~f(α(P)) have the same values
as those of α(P). Thus also the optimal solution sets coincide. Therefore we have
that Opt(~f(α(P))) contains all the optimal solutions of P if the abstraction is
order-preserving. This means that, in order to find an optimal solution of P, we
can find all optimal solutions of ~f(α(P)), and then use such a set to prune the
search for an optimal solution of P.
Theorem 9. Given an SCSP problem P over S, consider a function ~f on α(P)
which is safe, solution-correct and intensive, and let us assume the abstraction
is order-preserving. Then we have that Opt(P) ⊆ Opt(~f(α(P))).
Proof. It easily follows from Theorem 4 and from the safeness of ~f. □
Theorem 5 can be adapted to ~f(α(P)) as well, thus allowing us to use an
optimal solution of ~f(α(P)) to find both a lower and an upper bound of an
optimal solution of P.
5 Some abstraction mappings
In this section we will list some semirings and several abstractions between them,
in order to provide the reader with a scenario of possible abstractions that he/she
can use, starting from one of the semirings considered here. Some of these semirings
and/or abstractions have been already described in the previous sections
of the paper, however here we will re-define them to make this section self-
contained. Of course many other semirings could be defined, but here we focus
on the ones for which either it has been defined, or it is easy to imagine, a system
of constraint solving. The semirings we will consider are the following ones:
- the classical one, which describes classical CSPs via the use of logical and
and logical or: S_CSP = ⟨{false, true}, ∨, ∧, false, true⟩;
- the fuzzy semiring, where the goal is to maximize the minimum of some
values over [0, 1]: S_fuzzy = ⟨[0, 1], max, min, 0, 1⟩;
- the extension of the fuzzy semiring over the naturals, where the goal is to
maximize the minimum of some values over the naturals:
S_fuzzyN = ⟨N ∪ {+∞}, max, min, 0, +∞⟩;
- the extension of the fuzzy semiring over the positive reals:
S_fuzzyR = ⟨R+ ∪ {+∞}, max, min, 0, +∞⟩;
- the optimization semiring over the naturals, where we want to maximize the
sum of costs (which are negative integers):
S_optN = ⟨Z- ∪ {-∞}, max, +, -∞, 0⟩;
- the optimization semiring over the negative reals:
S_optR = ⟨R- ∪ {-∞}, max, +, -∞, 0⟩;
- the probabilistic semiring, where we want to maximize a certain probability
which is obtained by multiplying several individual probabilities. The idea
here is that each tuple in each constraint has associated the probability of
being allowed in the real problem we are modeling, and different tuples in
different constraints have independent probabilities (so that their combined
probability is just the multiplication of their individual probabilities) [10].
The semiring is S_prob = ⟨[0, 1], max, ×, 0, 1⟩;
- the subset semiring, where the elements are all the subsets of a certain set,
the operations are set intersection and set union, the smallest element is the
empty set, and the largest element is the whole given set:
S_sub = ⟨P(A), ∪, ∩, ∅, A⟩.
We will now define several abstractions between pairs of these semirings. The
result is drawn in Figure 9, where the dashed lines denote the abstractions that
have not been defined but can be obtained by abstraction composition. In reality,
each line in the figure represents a whole family of abstractions, since each ⟨α, γ⟩
pair makes a specific choice which identifies a member of the family. Moreover,
by defining one of these families of abstractions we do not want to say that there
do not exist other abstractions between the two semirings.
It is easy to see that some abstractions focus on the domain, by passing from
a given domain to a smaller one, others change the semiring operations, and
others change both:
1. from fuzzy to classical CSPs: this abstraction changes both the domain and
the operations. The abstraction function α is defined by choosing a threshold
within the interval [0, 1], say x, and mapping all elements in [0, x] to F and
all elements in (x, 1] to T. Consequently, the concretization function γ maps T
to 1 and F to x. See Figure 2 as an example of such an abstraction. We recall
that all the abstractions in this family are order-preserving, so Theorem 4
can be used.
2. from fuzzy over the positive reals to fuzzy CSPs: this abstraction changes
only the domain, by mapping the whole set of positive reals into the [0, 1]
interval. This means that the abstraction function α has to set a threshold, say
x, and map all reals above x into 1, and any other real, say r, into r/x. Then,
the concretization function γ will map 1 into +∞, and each element of [0, 1),
say y, into y × x. It is easy to prove that all the members of this family of
abstractions are order-preserving.
3. from probabilistic to fuzzy CSPs: this abstraction changes only the multiplicative
operation of the semiring, that is, the way constraints are combined.
In fact, instead of multiplying a set of semiring elements, in the abstracted
version we choose the minimum value among them. Since the domain remains
the same, both the abstraction and the concretization functions are
the identity (otherwise they would not have the properties required by a Galois
insertion, like monotonicity). Thus this family of abstractions contains
just one member.
It is easy to see that this abstraction is not order-preserving. In fact, consider
for example the elements 0.6 and 0.5, obtained in the abstract domain as the
minimum of two pairs of values, for example min(0.7, 0.6) = 0.6 and
min(0.9, 0.5) = 0.5. These same combinations in
the concrete domain would be 0.7 × 0.6 = 0.42 and 0.9 × 0.5 = 0.45,
resulting in two elements which are in the opposite order with respect to 0.5
and 0.6.
4. from optimization-N to fuzzy-N CSPs: here the domain remains the same
(the negative integers) and only the multiplicative operation is modified.
Instead of summing the values, we want to take their minimum. As noted in
a previous example, these abstractions are not order-preserving.
5. from optimization-R to fuzzy-R CSPs: similar to the previous one but on
the negative reals.
6. from optimization-R to optimization-N CSPs: here we have to map the negative
reals into the negative integers. The operations remain the same. A
possible example of abstraction is one that maps each negative real to a nearby
negative integer, for instance by rounding. It is not order-preserving.
7. from fuzzy-R to fuzzy-N CSPs: again, we have to map the positive reals into
the naturals, while maintaining the same operations. The abstraction could
be the same as before, but in this case it is order-preserving (because of the
use of min instead of sum).
8. from fuzzy-N to classical CSPs: this is similar to the abstraction from fuzzy
CSPs to classical ones. The abstraction function α has to set a threshold, say x,
and map each natural in [0, x] into F, and each natural above x into T. The
concretization function γ maps T into +∞ and F into x. All such abstractions
are order-preserving.
9. from subset CSPs to any of the other semirings: if we want to abstract to
a semiring with domain A, we start from the semiring with domain P(A).
The abstraction mapping α takes a set of elements of A and has to choose
one of them by using a given function, for example min or max. The concretization
function γ will then map an element of A into the union of all the
corresponding sets in P(A). For reasons similar to those used in Example 3,
some abstractions of this family may be not order-preserving.
Fig. 9. Several semirings and abstractions between them.
6 Related work
We will compare here our work to other abstraction proposals, more or less
related to the concepts of constraints.
Abstracting valued CSPs. The only other abstraction scheme for soft constraint
problems we are aware of is the one in [7], where valued CSPs [21] are abstracted
in order to produce good lower bounds for the optimal solutions. The concept of
valued CSPs is similar to our notion of SCSPs. In fact, in valued CSPs, the goal
is to minimize the value associated to a complete assignment. In valued CSPs,
each constraint has one associated element, not one for each tuple of domain
values of its variables. However, our notion of soft CSPs and that in valued
CSPs are just different formalizations of the same idea, since one can pass from
one formalization to the other one without changing the solutions, provided that
the partial order is total [2]. However, our abstraction scheme is different from
the one in [7]. In fact, we are not only interested in finding good lower bounds
for the optimum, but also in finding the exact optimal solutions in a shorter
time. Moreover, we don't define ad hoc abstraction functions but we follow the
classical abstraction scheme devised in [5], with Galois insertions to relate the
concrete and the abstract domain, and locally correct functions on the abstract
side. We think that this is important in that it allows us to inherit many properties
which have already been proven for the classical case. It is also worth noticing
that our notion of an order-preserving abstraction is related to their concept of
aggregation compatibility, although generalized to deal with partial orders.
Abstracting classical CSPs. Other work related to abstracting constraint problems
proposed the abstraction of the domains [11, 12, 22], or of the graph topology
(for example to model a subgraph as a single variable or constraint) [9]. We
did not focus on these kinds of abstractions for SCSPs in this paper, but we believe
that they could be embedded into our abstraction framework: we just need
to define the abstraction function in such a way that we can change not only the
semiring but also any other feature of the concrete problem. The only difference
will be that we cannot define the concrete and abstract lattices of problems by
simply extending the lattices of the two semirings.
A general theory of abstraction. A general theory of abstraction has been proposed in [16]. The purpose of this work is to define a notion of abstraction that can be applied to many domains: from planning to problem solving, from theorem proving to decision procedures. Then, several properties of this notion are considered and studied. The abstraction notion proposed consists of just two formal systems Σ_1 and Σ_2 with languages L_1 and L_2, together with an effective total function mapping L_1 into L_2. Much emphasis is placed in
[16] onto the study of the properties that are preserved by passing from the concrete
to the abstract system. In particular, one property that appears to be very
desirable, and present in most abstraction frameworks, is that what is a theorem
in the concrete domain, remains a theorem in the abstract domain (called the
TI property, for Theorem Increasing).
It is easy to see that our definition of abstraction is an instance of this general notion. Then, to see whether our concept of abstraction has this property, we first must say what a theorem is in our context. A natural and simple notion of a theorem could be an SCSP which has at least one solution with a semiring value different from the 0 of the semiring. However, we can be more general than this, and say that a theorem for us is an SCSP which has a solution with value greater than or equal to k, where k > 0. Then we can prove our version of the
TI property:
Theorem 10 (our TI property). Given an SCSP P which has a solution with value v ≥ k, the SCSP α(P) has a solution with value v' ≥ α(k).
Proof. Take any tuple t in P with value v ≥ k, and let v' be the value of the same tuple in α(P). By the local correctness of the abstract operations, we have that v ≤_S γ(v'). By monotonicity of α, we have that α(v) ≤_S' α(γ(v')) = v'. Since k ≤_S v, again by monotonicity of α we have α(k) ≤_S' α(v); thus, by transitivity, α(k) ≤_S' v'.
Notice that, if we consider the boolean semiring (where a solution has either
value true or false), this statement reduces to saying that if we have a solution
in the concrete problem, then we also have a solution in the abstract problem,
which is exactly what the TI property says in [16]. Thus our notion of abstraction,
as dened in the previous sections, on one side can be cast within the general
theory proposed in [16], while on the other side it generalizes it to concrete and
abstract domains which are more complex than just the boolean semiring. This
is predictable, because, while in [16] formulas can be either true (thus theorems)
or false, here they may have any level of satisfaction which can be described by
the given semiring.
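As a concrete illustration of the threshold abstraction of item 8 and of the TI property above, the following Python sketch (our own code; the instance, the threshold X and all function names are hypothetical and not taken from the paper) builds the pair (alpha, gamma) for a fuzzy-N to classical abstraction and checks that every solution of value at least k in the concrete problem remains a "theorem" in the abstract one.

import math

X = 3  # threshold (hypothetical choice)

def alpha(v):            # abstraction: naturals -> {False, True}
    return v > X

def gamma(t):            # concretization: True -> +infinity, False -> X (item 8)
    return math.inf if t else X

# A tiny fuzzy-N SCSP over two 0/1 variables; each constraint assigns a
# natural-number preference to every tuple, and combination is min (fuzzy).
constraints = {
    ('x', 'y'): {(0, 0): 1, (0, 1): 5, (1, 0): 4, (1, 1): 2},
    ('x',):     {(0,): 6, (1,): 3},
}

def value(assignment, cons):
    """Fuzzy (min-combined) value of a complete assignment."""
    val = math.inf
    for scope, table in cons.items():
        val = min(val, table[tuple(assignment[v] for v in scope)])
    return val

def abstract(cons):
    """Apply alpha tuple-wise to every constraint."""
    return {scope: {t: alpha(v) for t, v in table.items()}
            for scope, table in cons.items()}

def abs_value(assignment, cons):
    """Boolean (and-combined) value in the abstract classical CSP."""
    return all(table[tuple(assignment[v] for v in scope)]
               for scope, table in cons.items())

# TI check: value >= k in the concrete problem implies value >= alpha(k)
# (i.e. True) in the abstract problem.
k = 4
abs_cons = abstract(constraints)
for x in (0, 1):
    for y in (0, 1):
        a = {'x': x, 'y': y}
        if value(a, constraints) >= k:
            assert abs_value(a, abs_cons) >= alpha(k)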
Notice also that, in our definition of abstraction of an SCSP, we have chosen to have a Galois insertion between the lattice corresponding to the concrete semiring S and the lattice corresponding to the abstract semiring S~. This means that the orderings in the two lattices coincide with those of the semirings. We could have chosen differently: for example, that the orderings of the lattices in the abstraction be the opposite of those in the semirings.
In that case, we would not have had property TI. However, we would have the
dual property (called TD in [16]), which states that abstract theorems remain
theorems in the concrete domain. It has been shown that such a property can
be useful in some application domains, such as databases.
7 Conclusions and future work
We have proposed an abstraction scheme for abstracting soft constraint problems, with the goal of finding an optimal solution, or a good approximation of it, in shorter time. The main idea is to work on the abstract version of the problem and then bring back some useful information to the concrete problem, to make it easier to solve.
This paper is just a first step towards the use of abstraction for helping to find the solution of a soft constraint problem in a shorter time. More properties
can probably be investigated and proved, and also an experimental phase is
necessary to check the real practical value of our proposal. We plan to perform
such a phase within the clp(fd,S) system developed at INRIA [14], which
can already solve soft constraints in the classical way (branch-and-bound plus
propagation via partial arc-consistency).
Another line for future research concerns the generalization of our approach
to include also domain and topological abstractions, as already considered for
classical CSPs.
Acknowledgments
This work has been partially supported by Italian MURST project TOSCA.
--R
Constraint Solving over Semirings.
Abstract interpretation: A uni
Systematic design of program analysis.
The calculus of fuzzy restrictions as a basis for flexible constraint satisfaction.
Synthesis of abstraction hierarchies for constraint satisfaction by clustering approximately equivalent objects.
Uncertainty in constraint satisfaction problems: a probabilistic approach.
Eliminating interchangeable values in constraint satisfaction sub- problems
Interchangeability supports abstraction and reformulation for constraint satisfaction.
Partial constraint satisfaction.
Compiling semiring-based constraints with clp(fd
A theory of abstraction.
Consistency in networks of relations.
Constraint satisfaction.
Fuzzy constraint satisfaction.
Possibilistic constraint satisfaction problems
Valued Constraint Satisfaction Problems: Hard and Easy Problems.
An evaluation of domain reduction: Abstraction for unstructured csps.
Practical applications of constraint programming.
--TR
Partial constraint satisfaction
Possibilistic constraint satisfaction problems or "how to handle soft constraints?"
A theory of abstraction
Semiring-based constraint satisfaction and optimization
Theories of abstraction
Abstract interpretation
Systematic design of program analysis frameworks
A CSP Abstraction Framework
An Abstraction Framework for Soft Constraints and Its Relationship with Constraint Propagation
Uncertainty in Constraint Satisfaction Problems
Abstracting Soft Constraints
Compiling Semiring-Based Constraints with clp (FD, S)
AbsCon
Semiring-Based CSPs and Valued CSPs
--CTR
Thomas Ellman , Fausto Giunchiglia, Introduction to the special volume on reformulation, Artificial Intelligence, v.162 n.1-2, p.3-5, February 2005
Didier Dubois , Henri Prade, Editorial: fuzzy set and possibility theory-based methods in artificial intelligence, Artificial Intelligence, v.148 n.1-2, p.1-9, August
Salem Benferhat , Didier Dubois , Souhila Kaci , Henri Prade, Bipolar possibility theory in preference modeling: Representation, fusion and optimal solutions, Information Fusion, v.7 n.1, p.135-150, March, 2006
Stefano Bistarelli , Francesco Bonchi, Soft constraint based pattern mining, Data & Knowledge Engineering, v.62 n.1, p.118-137, July, 2007
Giampaolo Bella , Stefano Bistarelli, Soft constraint programming to analysing security protocols, Theory and Practice of Logic Programming, v.4 n.5-6, p.545-572, September 2004 | fuzzy reasoning;abstraction;constraint propagation;constraint solving;soft constraints |
635238 | Families of non-IRUP instances of the one-dimensional cutting stock problem | In case of the one-dimensional cutting stock problem (CSP) one can observe for any instance a very small gap between the integer optimal value and the continuous relaxation bound. These observations have initiated a series of investigations. An instance possesses the integer roundup property (IRUP) if its gap is smaller than 1. In the last 15 years, some few instances of the CSP were published possessing a gap greater than 1. In this paper, various families of non-IRUP instances are presented and methods to construct such instances are given, showing in this way that there exist many more non-equivalent non-IRUP instances than computational experiments with randomly generated instances suggest. In particular, an instance with gap equal to 10/9 is obtained. Furthermore, an equivalence relation for instances of the CSP is considered in order to become independent of the real size parameters. | Introduction
The one-dimensional cutting stock problem (CSP) is as follows: Given an unlimited number of pieces of identical stock material of length L (e.g. wooden lengths, paper reels, etc.), the task is to cut b_i pieces of length ℓ_i for i = 1, ..., m while minimizing the number of stock material pieces needed.
Throughout this paper we will use the abbreviation E := (m, L, ℓ, b) for an instance of the CSP with ℓ = (ℓ_1, ..., ℓ_m) and b = (b_1, ..., b_m). Without loss of generality we will assume L ≥ ℓ_1 > ℓ_2 > ... > ℓ_m > 0.
The classical solution approach is due to Gilmore and Gomory [5]. A non-negative integer vector a = (a_1, ..., a_m) is called a (feasible) cutting pattern of E if ℓ^T a ≤ L. A cutting pattern (shortly: pattern) a is said to be maximal if ℓ^T a + ℓ_m > L; a is proper if a ≤ b, i.e. a_i ≤ b_i for all i. Let J := {1, ..., n} denote the index set of all maximal patterns a^j, j ∈ J. If the integer variable x_j denotes the number of times pattern a^j is used, the CSP can be modelled as follows:
z(E) := min { Σ_{j∈J} x_j : Σ_{j∈J} a^j x_j ≥ b, x_j ∈ ZZ_+ for j ∈ J }.    (1)
Now the common solution approach consists of solving the corresponding continuous (LP) relaxation
z_c(E) := min { Σ_{j∈J} x_j : Σ_{j∈J} a^j x_j ≥ b, x_j ∈ IR, x_j ≥ 0 for j ∈ J },    (2)
where IR is the set of real numbers. Then, based on an optimal solution of (2), integer
solutions for (1) are constructed by means of suitable heuristics (cf. [5], [13]).
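The columns a^j of model (1) can be generated by a simple recursion. The following Python sketch is our own illustration (the lengths, demand and stock length used at the bottom are hypothetical values, not an instance from the literature); it enumerates all maximal patterns of an instance and optionally filters the proper ones.

def maximal_patterns(lengths, L):
    """All maximal cutting patterns a with l^T a <= L and l^T a + l_m > L,
    assuming lengths are sorted decreasingly."""
    m = len(lengths)
    patterns = []

    def rec(i, pattern, rest):
        if i == m:
            if rest < lengths[-1]:      # maximality: no smallest piece fits
                patterns.append(tuple(pattern))
            return
        for cnt in range(rest // lengths[i], -1, -1):
            rec(i + 1, pattern + [cnt], rest - cnt * lengths[i])

    rec(0, [], L)
    return patterns

def proper(patterns, demand):
    """Keep only patterns a with a <= b componentwise."""
    return [a for a in patterns
            if all(ai <= bi for ai, bi in zip(a, demand))]

# hypothetical divisible-case instance used only for illustration
lengths, demand, L = [44, 33, 12], [2, 3, 7], 132
print(len(maximal_patterns(lengths, L)))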
Let z(E) and z_c(E) denote the optimal value of (1) and (2) for an instance E, respectively. Practical experience and many computational tests have shown (cf. e.g. [13], [16]) that there is only a small gap Δ(E) := z(E) − z_c(E) for any instance E.
These observations have initiated a number of investigations. A set P of instances E has the integer round-up property (IRUP) if Δ(E) < 1 for all E ∈ P. An instance E with Δ(E) ≥ 1 is called a non-IRUP instance in the following. It is well-known that the CSP does not possess the IRUP. In [6] a first counter-example was given with gap equal to 1. In the last decade instances were found having a gap larger than 1 ([3], [12]). Since the gaps of these instances are less than 2, the modified integer round-up property (MIRUP) was defined in [14]: a set P of instances E possesses the MIRUP if Δ(E) < 2 for all E ∈ P. It is conjectured in [12] that the one-dimensional CSP possesses the MIRUP.
Since in the numerical tests non-IRUP instances occur relatively rarely, the impression
could arise that there exist only a limited number of non-IRUP instances. In this paper
we will present families with an infinite number of non-equivalent non-IRUP instances.
These investigations could be helpful for developing and testing of exact solution approaches
for the one-dimensional CSP. Especially if existing exact solution algorithms
(branch-and-bound algorithms are presented e.g. in [13] for the CSP and in [7] for the
bin packing problem) are applied to non-IRUP instances then computational difficulties
can arise. Moreover, they could have importance also for higher dimensional CSPs. For a
comprehensive overview of recent work in connection with cutting and packing problems
we refer to [2].
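Building on the pattern-enumeration sketch above, the bound z_c(E) of (2) and the gap Δ(E) can be computed mechanically. The sketch below is our own illustration and is not the method used in the paper: it obtains z_c with a standard LP solver and brute-forces z(E), which is only practical for very small instances.

import math
from itertools import combinations_with_replacement

import numpy as np
from scipy.optimize import linprog

def lp_bound(patterns, demand):
    """z_c(E): minimise sum x_j subject to A x >= b, x >= 0."""
    A = np.array(patterns, dtype=float).T          # rows: piece types
    c = np.ones(len(patterns))
    res = linprog(c, A_ub=-A, b_ub=-np.array(demand, dtype=float),
                  bounds=[(0, None)] * len(patterns), method="highs")
    return res.fun

def feasible_with(patterns, demand, t):
    """Brute force: can the demand be covered by t (maximal) patterns?"""
    for combo in combinations_with_replacement(patterns, t):
        if all(sum(p[i] for p in combo) >= demand[i]
               for i in range(len(demand))):
            return True
    return False

def gap(lengths, demand, L):
    pats = maximal_patterns(lengths, L)            # from the sketch above
    zc = lp_bound(pats, demand)
    z = math.ceil(zc - 1e-9)                       # z(E) >= ceil(z_c(E))
    while not feasible_with(pats, demand, z):
        z += 1
    return z - zc                                   # Delta(E) = z(E) - z_c(E)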
Throughout this paper we will use some common notations. Let M denote the set of
instances possessing IRUP. Especially for theoretical investigations, a special case of the
CSP is of interest which is named divisible case. An instance E belongs to the set D of
'divisible case'-instances if L is an integer multiple of any piece length. For example, the
instance E_D, presented in [11] and given in (3) below, belongs to D and has a gap Δ(E_D) which is the largest for D ever found.
In our investigations we also allow rational sizes, i.e. we assume ℓ ∈ IQ^m and L ∈ IQ, where IQ is the set of rational numbers. In most cases we will consider equivalent instances with integer sizes.
In the next section we will introduce an equivalence relation for instances of the CSP in
order to characterize different instances in a better way and to become independent from
the real sizes. Then in section 3, we will consider families of non-IRUP instances in the
divisible case. Families of non-IRUP instances not restricted to the divisible case will
be presented in section 4. Constructive methods to obtain non-IRUP instances will be
discussed in the fifth section followed by some concluding remarks.
2 Equivalence of Cutting Stock Problems
Among some possibilities of defining equivalence relations for instances of CSPs we will
consider here only the kind of equivalence which is based on the cutting patterns. Through-out
this section we assume all input-data to be integral.
2.1 Pattern equivalence
Let the instances E and Ē have the cutting patterns a^1, ..., a^n and ā^1, ..., ā^n̄, respectively. Without loss of generality the cutting patterns are assumed to be sorted lexicographically decreasing.
Definition 1. E and Ē are called equivalent (pattern-equivalent) if n = n̄ and a^j = ā^j for j = 1, ..., n.
Hence, any feasible pattern a^j of E is also feasible for Ē and vice versa.
For example, the following instances E are equivalent to the instance
26 ' 3 30,
for all d; d
Let A(E) denote the m × n matrix containing all maximal patterns of E. Hence, E and Ē are equivalent if and only if A(E) = A(Ē). Then the class K(E) of instances equivalent to E can be characterized by the set of all (ℓ̄, L̄) ∈ ZZ^{m+1} for which the corresponding instance has pattern matrix A(E). Obviously, (ℓ, L) ∈ K(E). For that reason, K(E) can be viewed as the intersection of ZZ^{m+1} and a cone which is induced by A(E).
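Pattern equivalence can be tested directly by comparing the sets of maximal patterns. The sketch below is our own illustration (reusing the maximal_patterns function from the earlier sketch; the two instances compared are hypothetical).

def pattern_matrix(lengths, L):
    """A(E): the set of maximal patterns of the instance; the demand b
    plays no role here."""
    return set(maximal_patterns(lengths, L))

def equivalent(inst1, inst2):
    """Pattern equivalence: A(E) = A(E-bar)."""
    (l1, L1), (l2, L2) = inst1, inst2
    return pattern_matrix(l1, L1) == pattern_matrix(l2, L2)

# scaling all sizes by a common factor leaves the patterns unchanged:
assert equivalent(([5, 3, 2], 11), ([10, 6, 4], 22))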
2.2 Characterization by system of inequalities
Let e_i denote the i-th unit vector of ZZ^m (i ∈ I).
Assertion 1. Let a^1, ..., a^n denote the maximal cutting patterns of instance E = (m, L, ℓ, b). Then the instance Ē = (m, L̄, ℓ̄, b̄) is equivalent to E if and only if (ℓ̄, L̄) ∈ ZZ^{m+1} is feasible for (4)-(6).
Proof: ")" E is equivalent to E, that means A(E) is pattern matrix of E. Hence, (4),
(5) and (6) are fulfilled.
"(" Let ('; L) 2 ZZ m+1 fulfill (4), (5) and (6). Because of (6) it follows
1. Because of (4) any maximal cutting pattern of E is also feasible with respect to
E. Because of (5) any maximal cutting pattern of E is also maximal with respect to E.
Assume there exists a maximal cutting pattern a of E which is not feasible or is not
maximal with respect to E.
If a is not maximal but feasible with respect to E then a with respect
to E and there exists a 2 ZZ such that a with respect to E .
occurs in (4) a contradiction arises to a is maximal with respect to E.
If a is not feasible with respect to E , i.e. '
a there exists a pattern ea with
which is maximal with respect to E . Hence, ' T ea L (because of
(because of (5)). This leads to a contradiction to a is feasible
with respect to E.
2.3 Dominance
Let a pattern matrix In order to get an instance of the cutting
stock problem with pattern matrix A, one has to determine a solution of (4) - (6).
Because of the condition numerous constraints in (4) and (5), respectively,
will automatically be fulfilled if other constraints are fulfilled as the following example
shows. Let
From ' T a 1 L and 9. That means, at most
the constraint ' T a 1 L is active in (4). Analogously, if ' T (a 9
8. Hence, again only one of the inequalities in (5) is
needed.
Definition 2. The cutting pattern a dominates the cutting pattern ā (written a ≽_d ā) if ℓ^T a ≥ ℓ^T ā for all ℓ fulfilling ℓ_1 ≥ ℓ_2 ≥ ... ≥ ℓ_m > 0.
For a pattern a and i ∈ I let s_i(a) := a_1 + ... + a_i.
Assertion 2. Let the patterns a and ā be given. Then:
a ≽_d ā if and only if s_i(a) ≥ s_i(ā) for all i ∈ I.
Proof: ")" If s 1 (a) ! s 1 (a) then a 1 ! a 1 and hence, a 6? d a.
f1g such that s j (a) s j (a) for
We define the vector ' 2 ZZ m
such that ' T a
Then
-z
=: ff
-z
-z
-z
It follows
sufficiently large since ff, fi and fl are independent of ' 0 .
"(" Now we assume s i (a) s i (a) for all i and
-z
-z
-z
0:
A pattern a is said to be dominant if there does not exist any (feasible) pattern ā, ā ≠ a, with a ≺_d ā. If ℓ and L are fixed, then a pattern a can easily be identified to be dominant.
Let
Assertion 3 The pattern a is dominant if and only if
ffi(a). Then there exists i 2 I n fmg with a
the pattern a dominates a, a cannot be dominant.
If a is not dominant then there exists a pattern a, a 6= a, with a ! d a. The following
procedure can be applied at least once:
Let j be the index determined by a
-z
Otherwise the pattern ea is defined by
a i otherwise.
Then
Furthermore, a ! d ea since s i
because of (7), we have ' T
Otherwise the procedure has to be repeated with ea instead of a.
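Assertion 2 gives a purely combinatorial dominance test via the partial sums s_i. The following Python sketch is our own illustration of that test and of the resulting filtering of a pattern list; the function names are ours.

def partial_sums(a):
    s, out = 0, []
    for ai in a:
        s += ai
        out.append(s)
    return out

def dominates(a, a_bar):
    """a >=_d a_bar  iff  s_i(a) >= s_i(a_bar) for every i (Assertion 2)."""
    return all(x >= y for x, y in zip(partial_sums(a), partial_sums(a_bar)))

def dominant_patterns(patterns):
    """Patterns that are not strictly dominated by another pattern
    of the list (cf. the definition of dominant patterns above)."""
    return [a for a in patterns
            if not any(b != a and dominates(b, a) and not dominates(a, b)
                       for b in patterns)]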
Let J_d ⊆ {1, ..., n} denote the index set of dominant cutting patterns in A. Then:
Assertion 4. (ℓ, L) ∈ ZZ^{m+1} fulfills (4) if and only if (ℓ, L) is feasible for the inequalities of (4) with index in J_d.
Proof: For any cutting pattern a^j there exists a column a^k with k ∈ J_d and a^j ≼_d a^k, so the inequality for a^j is implied by the inequality for a^k.
Furthermore, a maximal pattern a is said to be non-dominant if there does not exist any maximal pattern ā, ā ≠ a, with ā ≺_d a. Non-dominant patterns can be identified similarly to Assertion 3. Let J_nd ⊆ {1, ..., n} be the index set of non-dominant cutting patterns in A. Then:
Assertion 5. (ℓ, L) ∈ ZZ^{m+1} fulfills (5) if and only if (ℓ, L) is feasible for the inequalities of (5) with index in J_nd.
Proof: For any cutting pattern a^j there exists a column a^k with k ∈ J_nd and a^j ≽_d a^k, so the inequality for a^j is implied by the inequality for a^k.
In order to illustrate the usage of dominance let us consider the instance E_G presented in [4] with Δ(E_G) = 1.0667. The instance E_G has 41 feasible patterns. Among them are 20 maximal, 7 dominant and 5 non-dominant patterns. Hence, the class K(E_G) of equivalent instances can be described as the solution set of the system of inequalities induced by the dominant and non-dominant patterns. For such a solution (ℓ̄, L̄), E_G is equivalent to the corresponding instance Ē.
3 Divisible Case
In 1994 Nica ([9]) proposed a first infinite family of non-equivalent non-IRUP instances of the CSP which belongs to D. For the sake of completeness we repeat his result here. Let k = (k_1, ..., k_m) be given such that 2 ≤ k_1 < k_2 < ... < k_m. The length L of the stock material is defined to be Π_{i=1}^{m} k_i. According to the divisible case, the length ℓ_i of the i-th piece equals L/k_i (i ∈ I).
Using proposed in [9], are as follows:
km
Note, the instance ED defined in (3) belongs to this family with
Theorem 1 (Nica 1994). Let k ∈ ZZ^m be given such that 2 ≤ k_1 < ... < k_m are pairwise relatively prime. If 1 ≤ b_m < k_m and if the order demands cannot be cut using less than m cutting patterns, then E(k) does not belong to M.
Proof: Because of construction we have
z c
km
km
km
and z c (E(k))+1=km ? m \Gamma 1. Since the parameters k i are assumed to be pairwise relatively
prime there does not exist any non-negative integer vector a
a i
This means, there does not exist any proper and trim-less pattern.
If represents a maximal pattern then
a i
)c
a i
a i
a i
since km
a i
If we consider any set of cutting patterns a
fulfilling
a mj (m
Because of
e
we obtain
a
since m 3 and k 1 2 are assumed. Hence, at least one more piece of length ' m is
ordered as can be cut with cutting patterns.
Note, the gap Δ(E(k)) is asymptotically bounded by 1 + 1/k_m and tends to 1 if k_m is increased.
Next we show by means of an example that non-IRUP instances can also be obtained in
the divisible case if some of the assumptions in Theorem 1 are not fulfilled. For this let us
consider the instance
with
and the k i are not pairwise relatively
prime. Using the proper relaxation bound (cf. [10]) can be verified. Moreover,
Hence, separating this pattern a new instance
can be derived from E 1 with
Now we will consider another family of instances of D with be any positive
integer and let . L is defined to be the smallest common multiple
of the k i (or a multiple of it). For abbreviation, let the family of instances
9p+2
will be investigated. Notice, ED defined in (3) belongs to this family with
Theorem 2. For any integer p ≥ 1, the instance E(p) does not belong to M.
can be assumed. Because of E(p) 2 D we have
z
Because of z c 2, the instance E(p) would belong to M if and only if there exist any
proper pattern a
a 1
We will show that such a pattern cannot occur.
Since in the divisible case
feasible pattern a it follows
Since
the following inequality must be fulfilled:
Because of (9) we have
Furthermore, with
it follows that the pattern a has to fulfil the inequality
Summarizing (11) and (12), if E(p) 2 M then there must exist a proper pattern a with
Note, the left-hand side is integral. In case a the interval has the form
which contains the unique integer \Gamma2q \Gamma 1. The corresponding equation
can be transformed in
and we find for a 1 q \Gamma 1, a 1 integer, there exists no non-negative integer value a 2 .
In the other case, a 3 5, because of the interval in formula (13) can be written
in form
For the interval size is less than 5
1. Hence, there is no integer contained
in the interval. Therefore, no such pattern can exist.
4 Non-Divisible Case
The family E(t) with
is considered next.
Note, the family does not contain only non-equivalent instances. For
example, instances E(t) are equivalent and
Theorem 3. The instances E(t), t ∈ T, do not belong to M.
Proof: We show z c
Because of ' T 3. On the other hand, z c (E(t)) 3 since2B
Suppose z the optimal solution must contain only proper
patterns a with ' T a = L. We consider the pattern with a because of
is the only trim-less proper pattern. But now there does not
exist any proper trim-less pattern with a 2 1 since ' 5 is already used.
Using the substitution p=q
obtain from E(t) a further family of instances by removing the smallest piece:
Theorem 4. Let p/q ∈ (0, 1); then the instance E(p, q) does not belong to M.
The proof is similar to that of Theorem 3.
In case of p > 1 the length of the smallest piece can be reduced to q without violating the non-IRUP property. Let
we have to show z 4. Because of
an integer solution with 3 patterns has to have a total
trim-loss of units. If such a solution exists then it must contain the pattern
with a trim-loss of p units. Since any proper pattern containing at least one piece of length ℓ_2 but none of ℓ_5 has a trim-loss of at least p units, the total trim-loss is at least 2p
which is a contradiction.
Next we will consider the family
of non-equivalent CSP instances.
Theorem 5 The instances E(p) defined in (16) do not belong to M .
Proof: Because
1), we have
z c (E(p))
Because of ' T the total trim-loss has to be equal to
assumed. But the best proper pattern a with a
which has a trim-loss of must
hold.
Notice, lim p!1 z c which corresponds to lim p!1 k
Moreover, since the gaps are asymptotically bounded by 1 the gaps tend to
1 for p !1. The maximal gap for this family is 29/28 obtained for
The following procedure leads to further non-equivalent non-IRUP instances if p 5:
If pieces of length ' are composed to larger ones then no better integer solution
can arise. If additionally the value of the continuous relaxation does not increase then
non-IRUP instances will be constructed. Let
2), such that p
loss of generality, let . Then the non-equivalent instance E 0 (p) derived from
E(p) is as follows in case of
Using
times the pattern ' 0+ p+1
instead of
-times the pattern (1; 0; 0; 0; in (17), a feasible solution of the LP relaxation
with the same objective function value can be obtained.
For example, let 5. Then from
using
Furthermore, if p 5, odd, ld(p+1) 62 ZZ and
because of construction,
the length '
3 now can be reduced to 2p 2 since no new proper pattern gets feasible.
On the other hand, new non-proper patterns can lead to a smaller optimal value of the
continuous relaxation. In the example we obtain
with \DeltaE 00
Next we will consider a modification of E 00 (5). Let
Here we have z c 7. Thus E(b) 62 M . (The
verification of z done using the cutting plane algorithm proposed in [8].)
proper pattern a of
5. If a 2 the same gap arises;
for a 2 smaller gaps occur. The corresponding residual
instance of E(b) is
Residual (cf. [12]) means there exists an optimal solution x c of the LP relaxation with
components less than 1 so that x c cannot be used for a further reduction of the order
demands.
Non-IRUP instances can also be found for special classes of CSPs. Let us consider instances
with the property k
For short we refer to such instances as (1, k)-instances. It is known that all (1, k)-instances belong to M for k ∈ {2, 3}. But for any k ≥ 4 there exist non-IRUP (1, k)-instances. Such instances are
and
A family of non-IRUP (1, k)-instances with k ≥ 6 is the following:
Theorem 6. The instances E_k, k ≥ 4, do not belong to M.
Proof for k ≥ 6: It is easy to verify that the instance E_k is a (1, k)-instance. Since
at least 3 smaller pieces can be contained in a pattern a with a Because
of follows. The reduced instance E'_k obtained from E_k by omitting the
3 pieces of length ' 1 and by setting the stock length to correspond to the instance
E(t) with considered in Theorem 3. Therefore E 0
and hence, since
z c
Remember, the instance E 0
namely we
have any i. An instance is said to be a 1)-instance
if for any i k
The proof of Theorem 6 suggests the following procedure to construct non-IRUP (1; k)-
instances. Let a non-IRUP
In order to obtain a non-IRUP (1; k)-instances at first an instance
constructed. The piece lengths are increased by t (t sufficiently large), i.e. ' 0
1)t. Then, any feasible (with respect to E) pattern a with
e T a non-feasible (with
respect to E) pattern a with e T a tends to infinity
then
i tends to 1=(p + 1) for i 2 I. If we add dz c (E)e pieces of length ' 0(' 0sufficiently
large) and increase the stock length by ' 0
can be reached for k 2p 2. Hence, a (1; k)-instance with m+1 piece types is obtained.
Let us now consider the set of (k, k+1)-instances with k ∈ ZZ_+. All (1,2)-instances belong to M, but for any k ≥ 2 there exist non-IRUP (k, k+1)-instances, as will be shown next. For k ≥ 2 the instance considered below is a (k, k+1)-instance. Note, in order to make the inherent structure more clear, we do not sort the piece lengths in decreasing order.
Theorem 7 The instance does not belong to M .
z c
If z supposed then no trim-loss must occur. We consider patterns a with
must contain k +1
pieces. Because of construction, pattern is not proper.
Furthermore, but again this pattern is not proper since b 7 = 1.
there is no trim-less pattern a with a
z
5 Further Constructions
be an instance not belonging to M with
1. Moreover, let q 2 IQ with q 2=' m 0 . Then the instance E is defined by
Theorem 8 If for any a 0 2 ZZ m
then E does not belong to M too.
Proof: At first we prove dz c (E)e dz c
We consider an optimal solution of the LP relaxation of E 0 characterized by the set
of feasible patterns and corresponding coefficients x j ,
Now we assign to any a 0 j (j 2 J ) an m-dimensional pattern a j as follows:
a
The patterns a j are feasible with respect to E. Then,
Using (dz c ))-times the feasible pattern a 1 := e 1
a 1j x
Supplementing (p \Gamma 1)=p-times the pattern a 2 := pe 2 and
the pattern a 3 := pe 3 , a feasible solution of the continuous relaxation of E is found (which
is not-necessarily optimal). Hence,
z c (E) z c
Next we show z (E) ? dz c 1. Because of patterns a j with
a are needed to cut the b 1 pieces of length ' 1 . If we suppose an integer
solution for patterns then all pieces of length ' 2 must be in one
pattern a with a the total length in a not covered with ' 2 -pieces equals qL 0 +1.
all the pieces with lengths ' 4 cannot be cut with these dz c (E 0 )e
patterns a j . At least one piece (with length q' 0
remains unpacked. Since
this piece and the piece of length ' 3 cannot be cut in pattern a. Hence, one more pattern
is needed, that means z (E) ? dz c
Let us consider the instance
which is equivalent to that proposed by Gau ([4], cf. (8)). Here we have
If we apply the procedure corresponding to Theorem 8 with 2, the instance
results. For this instance we get z c
obtained. This is the largest gap found so far ([15]).
It is remarkable, although ' there is no way to cut the order
demands b i with 3 patterns.
If this procedure is applied to E with then the gaps 1.0741, 1.0556,
1.0444 and 1.0394 result.
Note, the repeated application of Theorem 8 is possible but does not succeed with larger
gaps.
There are some other possibilities to obtain non-IRUP instances. Let an instance
given. If the order demand of the m-th piece is increased by some units, say
three cases can occur for
If z c
If z c (E ) dz c (E)e and z
Otherwise, E and E have the same behavior.
The instances considered in this paper are mostly residual instances. But, if the order
demand b of a non-IRUP instance E is increased by an non-negative integer combination
of patterns occurring in an LP solution of E then a non-IRUP instance is obtained very
often. For example, if we use ED defined in (3) then we get
To verify such statements stronger relaxations of the CSP are needed as proposed in [10]
and [8].
6 Concluding Remarks
In this paper families of non-IRUP instances of the CSP are presented. Since the gaps tend to 1 for increasing parameters, counter-examples to the MIRUP conjecture, if such exist, probably have small ratios between the stock material length and the piece lengths.
There remain some open questions. In particular, the efficient identification of non-IRUP instances is of importance for exact solution approaches.
--R
Cutting and packing.
The duality gap in trim problems.
A linear programming approach to the cutting-stock problem (Part I)
An instance of the cutting stock problem for which the rounding property does not hold.
Knapsack Problems - Algorithms and Computer Implemen- tations
Solving cutting stock problems with a cutting plane method.
General counterexample to the integer round-up property
Tighter relaxations for the cutting stock problem
About the gap between the optimal values of the integer and continuous relaxation one-dimensional cutting stock problem
Theoretical investigations on the Modified integer Round-Up Property for the one-dimensional cutting stock problem
A branch-and-bound algorithm for solving one-dimensional cutting stock problems exactly
The modified integer round-up property of the one-dimensional cutting stock problem
A new 1csp heuristic.
Heuristics for the one-dimensional cutting stock problem: A computational study
--TR
Knapsack problems: algorithms and computer implementations
Computers and Intractability | integer round-up property;continuous relaxation;one-dimensional cutting stock problem |
635239 | A framework for the greedy algorithm. | Perhaps the best known algorithm in combinatorial optimization is the greedy algorithm. A natural question is for which optimization problems does the greedy algorithm produce an optimal solution? In a sense this question is answered by a classical theorem in matroid theory due to Rado and Edmonds. In the matroid case, the greedy algorithm solves the optimization problem for every linear objective function. There are, however, optimization problems for which the greedy algorithm correctly solves the optimization problem for many--but not all--linear weight functions. Our intention is to put the greedy algorithm into a simple framework that includes such situations. For any pair (S,P) consisting of a finite set S together with a set P of partial orderings of S, we define the concepts of greedy set and admissible function. On a greedy set L ⊆ S_k, the greedy algorithm correctly solves the naturally associated optimization problem for all admissible functions f. When P consists of linear orders, the greedy sets are characterized by this property. A geometric condition sufficient for a set to be greedy is given in terms of a polytope and roots that generalize Lie algebra root systems. | Introduction
This paper concerns a classical algorithm in combinatorial optimization, the greedy
algorithm. The MINIMAL SPANNING TREE problem, for example, is solved by the
greedy algorithm: Given a nite graph G with weights on the edges, nd a spanning
tree of G with minimum total weight. At each step in the greedy algorithm that solves
this problem, there is chosen a set of edges T comprising the partial tree; an edge e of
minimum weight among the edges not in T (the greedy choice) is added to T so long as
contains no cycle.
A greedy algorithm makes a locally optimal choice in the hope that this will lead to
a globally optimal solution. Clearly, greedy algorithms do not always yield the optimal
solution. But for a wide range of important problems the greedy algorithm is quite
powerful; a variety of such applications can be found in standard texts such as those by
Lawler [8] or Papadimitriou and Steiglitz [9]. A natural question, precisely posed below,
is the following. For which optimization problems does the greedy algorithm give the
correct solution. In a sense this question is answered by a classical theorem in matroid
theory due to Rado and Edmonds [4]. In the matroid case, the greedy algorithm always
solves the optimization problem. That is, the greedy algorithm solves the optimization
problem for every linear objective function. There are situations, however, for which
the greedy algorithm works for many - but not all - linear objective functions. A simple
framework for such problems is suggested below.
To make the question precise, consider a set system (S, I) consisting of a finite set S together with a nonempty collection I of subsets of S, called independent sets, closed under inclusion. Given a weight function f : S → R, extend this function linearly to the collection of subsets A ⊆ S by defining f(A) = Σ_{a∈A} f(a). There is a natural combinatorial optimization problem associated with the set system (S, I).
OPTIMIZATION PROBLEM.
Given a weight function f nd an independent set with the greatest weight.
In the spanning tree problem, S is the set of edges of the graph G and the independent
sets are the acyclic subsets of edges. Minimum and maximum in this problem are
interchanged by negating the weights.
The greedy algorithm for this optimization problem is simply:
GREEDY ALGORITHM
while S 6= ; do
remove from S an element a of largest weight.
I [ fag is independent then
By the theorem of Edmonds and Rado referred to earlier, the following statements
are equivalent for the set system (S; I). Here B denotes the set of bases, a basis being
a maximum independent set.
1. (S; I) is a matroid.
2. The greedy algorithm correctly solves the combinatorial optimization problem
associated with (S; I) for any weight function f
3. Every basis has the same cardinality k and, for every linear ordering on S, there exists a basis B = {b_1, ..., b_k} ∈ B such that for any A = {a_1, ..., a_k} ∈ B, if we write the elements of B and A both in increasing order, then a_i ≤ b_i for all i. This ordering on k-element subsets of S is called the Gale order.
In the spanning tree problem, the acyclic subsets of edges comprise the independent
sets of a matroid. Many well known optimization problems, besides the spanning tree
problem, can be put into the framework of matroids. Texts by Lawler [8] and by
Papadimitriou and Steiglitz [9] contain numerous examples.
Matroids are characterized by the property that the greedy algorithm correctly solves
the optimization problem for any weight function. There are, however, non-matroids
for which the greedy algorithm correctly solves the appropriate optimization problem
for many - but not for every weight function. This is the case for the following, in
order of increased generality: symmetric matroids [2], sympletic matroids [1] and the
Coxeter matroids of Gelfand and Serganova [5, 6, 11]. It is our intention in this paper
to put the greedy algorithm into a simple framework that includes such examples. In
particular, Theorems 4.1 and 4.2 in this paper contain, as a special case, the classical
matroid theorem of Rado and Edmonds stated earlier. In [1] (Theorem 16), Borovik,
Gelfand and White prove that for symplectic matroids the greedy algorithm correctly
solves the optimization problem for all \admissible" functions. This is also a special case
of Theorem 4.1; symplectic matroids are used as an example in Sections 2, 3 and 4 of this
paper. Given a Coxeter system (W, P) consisting of a finite irreducible Coxeter group W and a maximal parabolic subgroup P, in [11] (Theorem 1) a concrete realization of the set W/P of left cosets is given as a collection of subsets B of an appropriate partially ordered set S. Theorem 3 of [11], in part, states that the natural optimization problem for Coxeter matroids is solved, for all appropriate "admissible" objective functions, by
the greedy algorithm applied to (S; B). This result is again a special case of Theorem 4.1.
The framework in this paper, however, is conceptually simpler than the usual approaches
to Coxeter matroids. The theory of greedoids, developed by Korte and Lovasz [7],
concerns a framework for optimization problems for which the greedy algorithm nds
the optimal for all generalized bottleneck functions. Since our results concern linear
objective functions, they do not subsume results on greedoids. Our results on the
greedy algorithm apply to situations not previously appearing in the literature, for
example the cyclic and bipartite cases mentioned in Sections 3 and 4. Concerning
open avenues of research, it would be interesting to formulate additional optimization
problems, analogous to the minimum spanning tree problem for matroids, to which the
theory is applicable.
The basic notion in this paper is that of a pair (S, P) consisting of a finite set S together with a set P of partial orderings of S. The notions of greedy set and admissible function are defined in Section 2 and examples are given in Sections 2 and 3. It is shown
in Section 4 that, for greedy sets, the greedy algorithm correctly solves the naturally
associated optimization problem for all admissible functions. Indeed, when P contains
only linear orders, the greedy sets are characterized by this property. It is also proved in
Section 4 that there is essentially no loss of generality in assuming that P contains only
linear orders. Our results naturally lead to the problem of eectively characterizing
greedy sets. A geometric approach via polytopes and a generalization of Lie algebra
root systems is taken in Section 5. It is proved that if every edge of the polytope of a
collection L is parallel to a root, then L is a greedy set.
Greedy sets and Admissible Functions
A partial ordering ≤ of a set S is a binary relation on S that is reflexive, transitive and antisymmetric. If, for any a and b in S, either a ≤ b or b ≤ a, then the partial ordering is called a linear ordering of S. The notation a < b will mean that a ≤ b but a ≠ b.
Consider a pair (S, P) consisting of a finite set S and a collection P of partial orderings of S. A subset L ⊆ S will be called a greedy set for the pair (S, P) if L has a maximum for every ordering in P. That is, for every ordering ≤ in P, there is an a ∈ L such that b ≤ a for all b ∈ L.
Let S_k denote the collection of all k-element subsets of S. Each partial order on S induces a partial order on S_k, namely the Gale order. If A, B ∈ S_k, then A ≤ B in the Gale order if there are arrangements A = {a_1, ..., a_k} and B = {b_1, ..., b_k} of the elements of the two sets such that a_i ≤ b_i for all i. The set of Gale orderings induced on S_k by the orderings in P will be denoted P_k. A greedy set for the pair (S_k, P_k) will be referred to as a rank k greedy set for (S, P).
A weight function f : S → R is called compatible with a partial order ≤ on S if f(a) < f(b) whenever a < b. A weight function f is said to be admissible for (S, P) if f is compatible with some partial order in P. An admissible weight function f for (S, P) can be extended to an admissible weight function f : S_k → R by defining f(A) = Σ_{a∈A} f(a). That this function is indeed admissible is the statement of the corollary below. The first proposition is obvious from the definitions of greedy set and admissible weight function.
Proposition 2.1. If L ⊆ S is a greedy set for (S, P) and f is an admissible weight function, then f attains a unique maximum on L.
Proposition 2.2. Let S be a partially ordered set and S_k the set of all k-element subsets of S with the Gale order. For any A, B ∈ S_k we have B ≤ A if and only if f(B) ≤ f(A) for every weight function f compatible with the order on S.
Proof: Assume that B ≤ A. Then there are orderings A = {a_1, ..., a_k} and B = {b_1, ..., b_k} with b_i ≤ a_i for all i, and hence f(B) = Σ_i f(b_i) ≤ Σ_i f(a_i) = f(A).
Conversely assume that it is not the case that B ≤ A. We will construct a function f that is compatible with the order on S but for which f(B) > f(A). For each element a ∈ A let B_a = {b ∈ B : b ≤ a}. Now B ≤ A in the Gale order if and only if there is a set of distinct representatives of the sets B_a, a ∈ A. Hence there is no such set of representatives, and, by Philip Hall's theorem on distinct representatives, there must be a set A' ⊆ A such that |∪_{a∈A'} B_a| < |A'|. Now define f(x) = 0 if x ≤ a for some a ∈ A', and f(x) = 1 otherwise. For this function we have f(B) = k − |∪_{a∈A'} B_a| > k − |A'| ≥ f(A). It is easy to see that this function f can be perturbed slightly to be compatible with the order on S and to retain the property that f(B) > f(A).
Remark. By the same reasoning as in the proof above, it is also true that, for any A, B ∈ S_k, B < A if and only if f(B) < f(A) for every weight function f compatible with the order on S.
Corollary 2.3. If f : S → R is admissible for (S, P), then f : S_k → R is admissible for (S_k, P_k).
This paper uses notation that is common in the literature. In particular, we use [n] for the set {1, ..., n} of the first n positive integers. For readability, brackets may be deleted in denoting a set; for example {2, 4, 6} may be denoted simply 246.
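Before turning to the examples, we note that for a linear order the Gale comparison of two k-sets reduces to sorting both sets by the order and comparing componentwise, and that the greedy-set property can then be checked directly from the definition. The following Python sketch is our own illustration; the function names and the representation of an order as a rank function are our assumptions.

def gale_le(A, B, rank):
    """A <= B in the Gale order induced by a linear order given as a rank
    function (smaller rank means smaller element)."""
    a = sorted(A, key=rank)
    b = sorted(B, key=rank)
    return all(rank(x) <= rank(y) for x, y in zip(a, b))

def gale_maximum(L, rank):
    """The Gale maximum of L for this linear order, or None if none exists."""
    for A in L:
        if all(gale_le(B, A, rank) for B in L):
            return A
    return None

def is_greedy_set(L, orders):
    """L is a greedy set for (S, P) iff it has a Gale maximum for every
    linear order in P; orders are given as rank functions."""
    return all(gale_maximum(L, rank) is not None for rank in orders)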
Example 2.4 Matroids are a special case.
Let S = [n] and let P be the set of all linear orderings of S. Then clearly every injective weight function f : S → R is admissible. By definition, a rank k greedy set L is a collection of k-element subsets of S such that, for every linear ordering of [n], there is a unique maximum in L in the induced Gale ordering on S_k. But this is exactly the third characterization of matroid given in the introduction. In other words, L is a rank k greedy set for (S, P) if and only if L is the set of bases of a rank k matroid.
Example 2.5 Symplectic matroids. Let S = [n] ∪ [n]*, where [n]* = {1*, ..., n*} and, by convention, (i*)* = i. Let P be the set of all linear orderings of S with the property that i < j if and only if j* < i* for any i, j ∈ S. Equivalently, the pairs i and i* appear symmetrically in the order. For example, with n = 3 one such order is 2* < 1 < 3 < 3* < 1* < 2. Consequently the admissible weight functions include all injective weights f : S → R such that f(i*) = −f(i) for each i ∈ [n]. A symmetric matroid of Bouchet [2] is exactly a rank n greedy set L for the pair (S, P) with the additional property that A ∩ A* = ∅ for each A ∈ L. More generally, the rank k greedy sets, 0 ≤ k ≤ n, satisfying this same property are exactly the symplectic matroids of Borovik, Gelfand and White [1].
3 The Group Case
Let S be a partially ordered set and G a transitive group of permutations of S. If ≤ denotes the order on S and σ ∈ G, let ≤_σ denote the order defined by
a ≤_σ b if σ^{-1}(a) ≤ σ^{-1}(b).
For rank k subsets, the corresponding Gale order will likewise be denoted A ≤_σ B. This shifted order will be called the σ-order. Let P(G) be the set of all σ-orders on S for σ ∈ G. For example, if S = [n] with the order 1 < 2 < ... < n, then P(G) is the set of all orders σ(1) < σ(2) < ... < σ(n) where σ ∈ G. The pair (S, P(G)) is referred to as the group case.
Although G acts transitively on S, the induced action of G on S k may not be tran-
sitive. There will therefore be situations where attention will be restricted to a single
orbit O k of G acting on S k . The rank k greedy sets of (S; P(G)) will then be restricted
to being contained in a set O k on which G acts transitively.
Example 3.1 Matroids. If S = [n] with the linear order 1 < 2 < ... < n and Σ_n is the symmetric group consisting of all permutations of [n], then the greedy sets for (S, P(Σ_n)) are exactly the ordinary matroids of Example 2.4.
Example 3.2 Symplectic matroids. Let S = [n] ∪ [n]* with a linear order as in Example 2.5, and let G be the hyperoctahedral group of permutations of S generated by all transpositions of the form (i i*) and all involutions of the form (i j)(i* j*). Note that the set of all k-element sets A with the property that A ∩ A* = ∅ is a single orbit of G acting on S_k; call this orbit O_k. Then the greedy sets L ⊆ O_k for (S, P(G)) are exactly the symplectic matroids of Example 2.5 above.
Example 3.3 Cyclic case. Let S = [n] be a poset with the order 1 < 2 < ... < n and let G be the cyclic group acting on [n] generated by the cycle (1 2 ... n). For example, 3 < 4 < 1 < 2 is a cyclic ordering for n = 4. It is interesting that, even for this elementary example, characterization of the collection of greedy sets is evasive. For example, it is easy to check that the orbit of 652 under the action of the cyclic group C_7 acting on [7] is a greedy set, but the orbit of 652 under the action of C_6 acting on
[6] is not a greedy set. (As noted earlier, the set {6, 5, 2} is denoted simply 652.)
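The claim about the two cyclic orbits can be checked mechanically with the functions from the earlier sketch (again our own code, not part of the paper): build the n shifted linear orders, take the orbit of the given set, and test the greedy-set property.

def cyclic_orders(n):
    """The n shifted linear orders of [n] = {1, ..., n}, each given as a
    rank function."""
    return [(lambda e, t=t, n=n: (e - 1 - t) % n) for t in range(n)]

def cyclic_orbit(A, n):
    """Orbit of the set A under the cyclic group acting on [n]."""
    return [frozenset(((a - 1 + t) % n) + 1 for a in A) for t in range(n)]

# The orbit of {6, 5, 2} is a greedy set for C_7 acting on [7] ...
assert is_greedy_set(cyclic_orbit({6, 5, 2}, 7), cyclic_orders(7))
# ... but not for C_6 acting on [6].
assert not is_greedy_set(cyclic_orbit({6, 5, 2}, 6), cyclic_orders(6))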
Example 3.4 Bipartite case. Let S = [n] ∪ [n]* and let P consist of all linear orders such that either all the unstarred elements precede the starred elements or all the starred elements precede the unstarred elements. This is the group case where, if [n] and [n]* are considered as vertex sets of the two parts of a complete bipartite graph K_{n,n}, then the group is the automorphism group of K_{n,n}.
4 The Greedy Algorithm
As previously, S is a nite set and P a collection of partial orderings of S. The optimization
problem applied to the pair (S; P) is the following.
OPTIMIZATION PROBLEM
Given an admissible weight function f : S → R and a set L ⊆ S_k, find an element of L that maximizes the induced weight function f : S_k → R.
Given L ⊆ S_k, call a subset I ⊆ S independent with respect to L if I is a subset of some element of L. The greedy algorithm, precisely as stated in the introduction,
applied to this optimization problem, merely chooses the largest weight element at each
stage subject to the condition that the resulting set is independent with respect to L.
The following result is basic.
Theorem 4.1 Let (S, P) be a pair consisting of a collection P of orderings of the finite set S. If L ⊆ S_k is a rank k greedy set, then the greedy algorithm correctly solves the optimization problem for every admissible weight function f : S → R.
Proof: Assume that L is a greedy set and that f is compatible with some order, say ≤, in P. Since L is a greedy set, it contains a unique set A that is maximum with respect to the Gale order. We claim that the greedy algorithm selects this set A. Suppose, instead, that B ≠ A is chosen. Order the sets A = {a_1, ..., a_k} and B = {b_1, ..., b_k} so that the elements of B appear in the order selected by the greedy algorithm and b_i ≤ a_i for 1 ≤ i ≤ k. Assume that a_j is the first element with b_j ≠ a_j, so that b_j < a_j. It follows, by the compatibility of f, that f(b_j) < f(a_j). But this contradicts the fact that, at this stage, the greedy algorithm chooses b_j.
In the case that P contains only linear orderings of S, the greedy sets are actually
characterized by the property that the greedy algorithm correctly solves the optimization
problem for every admissible weight function. The assumption that P contains only
linear orderings of S is a reasonable one in light of Theorem 4.3 below.
Theorem 4.2 Let P be a set of linear orderings of S and L ⊆ S_k. Then the greedy algorithm correctly solves the optimization problem for every admissible weight function if and only if L is a greedy set.
Proof: In one direction, this result is a corollary of Theorem 4.1. To prove it in the other direction, assume that L is not a greedy set. Then there exists an order on S, say ≤, for which L does not attain a unique maximum. Since ≤ is a linear ordering of S, order the elements in each set in L in decreasing order. Let A denote the lexicographically maximum element of L, and let B ≠ A be a set in L that is maximal with respect to the Gale order. According to Proposition 2.2 there exists a weight function f compatible with ≤ such that f(B) > f(A). On the other hand, the greedy algorithm chooses A.
It is desirable to choose P so that there are many admissible weight functions and
many rank k greedy sets. Then the greedy algorithm will correctly solve a large collection
of optimization problems. Unfortunately, these two objectives - many admissible
functions and many greedy sets - are often conflicting. If (S, P) and (S, Q) have the
same collection of admissible functions, but, for each k, each rank k greedy set for (S; P)
is a rank k greedy set for (S; Q), then clearly it is preferable, for algorithmic purposes,
to use (S; Q) rather than (S; P).
Theorem 4.3 For any pair (S, P) there is a pair (S, Q) such that
1. Q contains only linear orders;
2. (S; P) and (S; Q) have the same admissible injective functions; and
3. for each k, each rank k greedy set for (S; P) is also a rank k greedy set for (S; Q).
Proof: Let Q be the collection of all linear extensions of the orders in P. Clearly condition (1) is satisfied. Concerning condition (2), assume that the weight function f is admissible for (S, Q). Then f is compatible with some linear ordering ≤' in Q and hence is also compatible with any ordering in P having ≤' as a linear extension. Therefore f is admissible for (S, P). Conversely, assume that f is admissible for (S, P) and is injective. Then f is compatible with some ordering ≤ in P. Assume that the elements of S are indexed such that f(x_1) < f(x_2) < ... < f(x_n). Then, by definition, the linear order x_1 < x_2 < ... < x_n is a linear extension of ≤ and f is compatible with this linear order. Therefore f is admissible for (S, Q).
Concerning condition (3), assume that L is a rank k greedy set for (S, P). To show that L must also be a greedy set for (S, Q), let ≤' be any linear order in Q. Then ≤' is a linear extension of some order ≤ in P. Since L is greedy for (S, P), there is a unique maximum set A in L such that, for any B ∈ L, b_i ≤ a_i for all i for some ordering of the elements of A and B. Because ≤' is a linear extension of ≤, it is also true that b_i ≤' a_i for all i. Therefore A is also the unique maximum with respect to the Gale order relative to ≤'. So L is a greedy set for (S, Q).
Remark. The assumption in condition (2), that the admissible functions be injective, is not a serious restriction because, for any admissible function f, there is an injective admissible function that is a small perturbation of f.
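The construction in the proof, replacing P by the set Q of all linear extensions of its orders, can be made explicit. The following Python sketch is our own illustration (the representation of the partial order as a predicate is our assumption); it enumerates all linear extensions of a partial order on a finite set.

def linear_extensions(S, less):
    """Yield all linear extensions of the partial order `less` on S,
    where less(a, b) means a is strictly below b."""
    S = list(S)
    if not S:
        yield []
        return
    for a in S:
        # a can come first iff no remaining element lies strictly below it
        if not any(less(b, a) for b in S if b != a):
            rest = [b for b in S if b != a]
            for ext in linear_extensions(rest, less):
                yield [a] + ext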
The following examples are intentionally kept simple in order to illustrate the main
ideas.
Example 4.4 Symplectic matroids. This is a continuation of Example 2.5 of Section
2. Consider the case 3. It is not hard to check that the set
is a rank 2 greedy set. Consider, as an example, the particular function f : [3] ∪ [3]* → R defined by
This is an admissible function because it is compatible with the ordering 1 2 3
. The greedy algorithm applied to L chooses the set 1 3 whose total
weight is 10, greater than that of any other set in L.
On the other hand, for the function
which is not admissible, the greedy algorithm again chooses the set 1 3 whose total
weight is 7, but the total weight of 12 is 9. The greedy algorithm fails in this case.
(Note that the collection L is not an ordinary matroid on the set [3] ∪ [3]*.)
Example 4.5 Cyclic case. This is a continuation of Example 3.3 of Section 3.
Consider the case n = 4, and take the admissible weight function
2:
The set
is a rank 2 greedy set for (S; P). The greedy algorithm chooses 24 whose weight is 6,
greater than the other element of L. On the other hand, for the weight function
which is not admissible, the greedy algorithm fails for L.
Example 4.6 Bipartite case. This is a continuation of Example 3.4 of Section 3.
Consider the case
is a rank 2 greedy set. An admissible function is one for which either f(i) < f(j) for
every unstarred i and starred j or f(i) < f(j) for every starred i and unstarred j. As a
simple example, let
Then f is admissible, and the greedy algorithm chooses 1 2 which has total weight 7
compared to the total weight 3 of 12. On the other hand the function
is not admissible. The greedy algorithm chooses 1 2 which has total weight 6, although
12 has greater total weight 7.
5 Roots and Polytopes
In light of Theorems 4.1 and 4.2, it is important to have an efficient method to determine whether a collection L ⊆ S_k is a greedy set. If (S, P) is such that P consists of N linear orderings of S, then N may well be exponential as a function of n. If L is a collection of k-element subsets of S, then it will take no less than exponential time to check, using the definition, whether L is a greedy set. An alternative procedure is sought that is polynomial in n and the cardinality of L. In the matroid case, any one of many cryptomorphic definitions of matroid can be used to do this; that is one of the nice properties of matroids. For ordinary and symplectic matroids, the associated matroid polytope [10] furnishes an efficient procedure to determine whether a set is greedy. For the general case, we give a geometric approach (Theorem 5.7 and the remarks that follow) using polytopes and roots.
Because of Theorem 4.3, it will be assumed throughout this section that S = [n] and P contains only linear orders. Each linear order ≺ in P can be denoted by the permutation σ for which σ(1) ≺ σ(2) ≺ ... ≺ σ(n). Given the pair (S, P), we will associate a polytope Δ(L) to each subset L ⊆ S_k as follows. Let ε_1, ..., ε_n be the canonical orthonormal basis for n-dimensional Euclidean space R^n. For any k-element subset A of S, set δ_A = Σ_{i∈A} ε_i. Let Δ(L) be the convex hull of the points {δ_A : A ∈ L}. Notice that Δ(L) lies in the (n−1)-dimensional hyperplane in R^n with equation x_1 + x_2 + ... + x_n = k.
Define a root of (S, P) as a non-zero vector r = (r_1, ..., r_n) ∈ R^n satisfying:
1.
2.
3. for every σ ∈ P there exists an ε ∈ {1, −1} such that
for each
In particular, note that ε_i − ε_j is a root of any pair (S, P) for all i ≠ j. Our definition of root of (S, P) is meant as a generalization of a Lie algebra root system. The two notions coincide for root systems of Coxeter groups. In the group case, we refer to a root of (S, P(G)) as a root of G. The group acts on the set of roots. The roots in the following examples are easy to compute.
Example 5.1 For either the symmetric group Σ_n or the alternating group A_n acting on [n], the roots are {ε_i − ε_j : i ≠ j}.
Example 5.2 For the cyclic group Z_n of Example 3.3 acting on [n], the roots are
In other words, +1 and −1 alternate in the vector r. For
example, with the vector (1; 0; is a root while (1; 1; 0; 1; 1) is not.
Example 5.3 For the hyperoctahedral group of Example 3.2 acting on [n] [ [n] the
roots are
For example, for the vector (1; 1; 0; 0; 1; 1) is a root.
Example 5.4 For the bipartite group of Example 3.4 acting on [n] ∪ [n]*, let
. The roots are
The idea now is, given a set L, to find a computationally efficient algorithm, in terms of its polytope Δ(L), for deciding whether L is a greedy set. A vector v = (x_1, ..., x_n) will be called σ-admissible for (S, P) if 0 < x_{σ(1)} < x_{σ(2)} < ... < x_{σ(n)}. A vector that is σ-admissible for some σ ∈ P will simply be called admissible.
Theorem 5.5 A subset L ⊆ S_k is a rank k greedy set for (S, P) if and only if, for each admissible vector v, the linear function f_v(x) = ⟨x, v⟩ attains a maximum on Δ(L) at a unique vertex.
Proof: If v = (x_1, ..., x_n), then f_v(δ_A) = Σ_{i∈A} x_i. Notice that, for A, B ∈ S_k, we have, by Proposition 2.2 and the remark following it, that A < B in the σ-Gale order if and only if f(A) < f(B) for all positive weight functions f compatible with σ. This is equivalent to Σ_{i∈A} f(i) < Σ_{i∈B} f(i) for all positive weight functions such that 0 < f(σ(1)) < f(σ(2)) < ... < f(σ(n)). But this, in turn, is the same as f_v(δ_A) < f_v(δ_B) for all σ-admissible vectors v. Thus the linear function f_v attains a maximum on Δ(L) at a unique vertex for each admissible vector v if and only if there is a unique σ-Gale maximum in L for each σ ∈ P. But the latter condition is the definition of greedy set.
Lemma 5.6 Let r be a vector satisfying conditions (1) and (2) in the definition of root. Then r is a root if and only if r^⊥ contains no admissible vector.
Proof: Assume that orthogonal to some admissible vector
is admissible, there is a 2 P such that 0 < x 1 < x 2 <
We have
1g. Then
Assume, by way of contradiction, that
r is a root. Condition (3) in the denition of root implies (without loss of generality)
that there is a bijection from A onto B so that (i) > i for all i 2 A. But this implies
that
Conversely, assume that not a root. We use the same notation
A and B as above for some xed 2 P violating condition (3). Without loss of
generality we can ignore the 0 entries in r and assume that A [ [n] and that
A. Dene a bijection from A to B recursively as follows. Let (1) be the least
element of B. Having dened (i) for i < j, dene (j) as the least element of B
not already in the image of . Let
and r is not a root, C and
are nonempty. A -admissible vector now be chosen so that
arbitrary positive and negative values, respectively.
In particular, a -admissible vector v can be chosen so that hr;
Theorem 5.7 Let P be a collection of linear orderings of a finite set S and let L ⊆ S_k. If every edge of Δ(L) is parallel to a root, then L is a greedy set for (S, P).
Proof: Assume that L is not a greedy set. By Theorem 5.5 there exists an admissible vector v such that the linear function f_v achieves its maximum on at least two vertices of Δ(L). Since Δ(L) is convex, f_v achieves its maximum on some edge e of Δ(L). Therefore ⟨e, v⟩ = 0, and by Lemma 5.6 the edge e is not parallel to a root.
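For the root system of Example 5.1, the condition of Theorem 5.7 can be tested edge by edge. The Python sketch below is our own illustration; it assumes the edges of Δ(L) have already been identified (e.g. by a convex-hull computation, which is omitted here) and only checks whether the difference of two distinct vertex vectors is parallel to a root.

import numpy as np

def delta(A, n):
    """The 0/1 vertex vector of the k-set A in R^n (1-based indices)."""
    v = np.zeros(n)
    for i in A:
        v[i - 1] = 1.0
    return v

def parallel(u, v):
    """True if the non-zero vectors u and v are parallel."""
    return np.linalg.matrix_rank(np.vstack([u, v])) == 1

def symmetric_group_roots(n):
    """Roots of the symmetric group (Example 5.1): e_i - e_j, i != j."""
    eye = np.eye(n)
    return [eye[i] - eye[j] for i in range(n) for j in range(n) if i != j]

def edge_parallel_to_root(A, B, roots, n):
    """Assumes A != B, so delta(A) - delta(B) is non-zero."""
    d = delta(A, n) - delta(B, n)
    return any(parallel(d, r) for r in roots)

For the matroid case this check has a simple combinatorial meaning: δ_A − δ_B is parallel to some ε_i − ε_j exactly when A and B differ in a single element.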
Unfortunately the converse of Theorem 5.7 is, in general, false. There exist greedy sets L for which the polytope Δ(L) has non-root edges. As an example, consider the cyclic group Z_5 acting on the poset {1, 2, 3, 4, 5}. This is the group case of Example 3.3. If L = {12, 15, 23, 34, 35}, then L is a greedy set: the (Gale) maximum is 35 for the order 12345; 15 for the order 23451; 12 for the order 34512; 23 for the order 45123; and 34 for the order 51234. It is easy to check that the segment joining δ_12 and δ_34 is an edge e of Δ(L), but δ_12 − δ_34 is not a root because condition (3) in the definition fails.
Question 5.8 Let G denote a permutation group acting on S, and let O_k be an orbit of G acting on S_k. Is it true that L ⊆ O_k is a rank k greedy set for (S, P(G)) if and only if every edge of Δ(L) is parallel to a root of G?
In certain cases the question can be answered in the affirmative. An edge joining
vertices x and y of a polytope in R n will be called supporting if it is contained
in a supporting hyperplane of that is orthogonal to an admissible vector v with
According to Lemma 5.6, an edge of a polytope (L) not parallel to a
root must be orthogonal to some admissible vector; so it is possible for such an edge to
be supporting. Also according to Lemma 5.6, an edge that is parallel to a root cannot be
supporting. A set L S k is called supporting if each edge of (L) is either supporting
or parallel to a root.
Theorem 5.9 Let P be a collection of linear orderings of a finite set S such that P is
closed under the operation of taking the inverse (reversing order). Assume that L S k
is supporting. Then L is a greedy set for (S; P) if and only if every edge of (L) is
parallel to a root.
Proof: In one direction the statement follows directly from Theorem 5.7. Conversely,
assume that some edge e of (L) is not parallel to a root. Because L is supporting,
there is an admissible vector v such that e is contained in a supporting hyperplane
of (L) that is orthogonal to v, and the linear function f v takes equal
values at the two endpoints of e. This implies that f v (x) attains a maximum on (L)
at both endpoints of the edge e or a minimum at both endpoints of the edge e. By
Theorem 5.5, if f v (x) attains a maximum at both endpoints, then L is not a greedy
set. If f v (x) attains a minimum at both endpoints, say - A and - B , then, as in the proof
of Theorem 5.5, there is a 2 P such that A and B are both minimum in the -Gale
order. But a Gale minimum for is a Gale maximum for the inverse of , which is also
in P. Hence L is not a greedy set.
Let G be a permutation group acting on S. Then P(G) is closed under the operation
of taking the inverse. Let O k be an orbit under the induced action of G on S k . Sometimes
it is the case that any L O k is supporting. This is so, for example, if each vector
determined by a point on the boundary of (O k ) is either admissible or orthogonal to
a root. It can be shown that this is the case for ordinary and symplectic matroids. In
general, it is not the case that any L O k is supporting. Again consider the cyclic
group Z 5 acting on the poset {1, 2, 3, 4, 5}. If
then L lies on one orbit under the action of the cyclic group acting on the set of pairs
but the edge joining - 12 and - 34 in (L) is not parallel to a root and
is not supporting. For e to be supporting, there would have to exist an admissible
vector This would imply that either
In both cases, the vertices
lie on different sides of the hyperplane containing e and orthogonal to v. So
there does not exist a supporting hyperplane of (L) containing e which is orthogonal
to some admissible vector.
--R
Combinatorics 8
Mathematical Programming 38
Optimal assignments in an ordered set: an application of matroid theory
Matroids and the greedy algorithm
On a general de
Combinatorial geometries and torus strata on homogeneous compact manifolds
Networks and Matroids
A geometric characterization of Coxeter matroids
The greedy algorithm and Coxeter matroids
--TR
Combinatorial optimization: algorithms and complexity
Greedy algorithm and symmetric matroids
Symplectic Matroids
The Greedy Algorithm and Coxeter Matroids | coxeter matroid;greedy algorithm;matroid |
635251 | Algorithms for the fixed linear crossing number problem. | several heuristics and an exact branch-and-bound algorithm are described for the fixed linear crossing number problem (FLCNP). An experimental study comparing the heuristics on a large set of test graphs is given. FLCNP is similar to the 2-page book crossing number problem in which the vertices of a graph are optimally placed on a horizontal "node line" in the plane, each edge is drawn as an arc in one half-plane (page), and the objective is to minimize the number of edge crossings. In this restricted version of the problem, the order of the vertices along the node line is predetermined and fixed. FLCNP belongs to the class of NP-hard optimization problems (IEEE Trans. Comput. 39 (1) (1990) 124). The heuristics are tested and compared on a variety of graphs including some "real world" instances of interconnection networks proposed as models for parallel computing. The experimental results indicate that a heuristic based on the neural network model yields near-optimal solutions and outperforms the other heuristics. Also, experiments show the exact algorithm to be feasible for graphs with up to 50 edges, in general, although the quality of the initial upper bound is more critical to runing time than graph size. | Introduction
Recently, several linear graph layout problems have been the subject of study.
Given a set of vertices, the problem involves placing the vertices along a horizontal
\node line" in the plane and then adding edges as specied by the
interconnection pattern. The node line, or \spine", divides the plane into two
half-planes, also called \pages", corresponding to the two pages of an open
book. Some examples of linear layout problems are the bandwith problem [8],
the book thickness problem [2,29], the pagenumber problem [9,32], the boundary
VLSI layout problem [50], and the single-row routing problem [37]. For
Figure 1. Fixed linear embedding of the complete graph K 6 : (a) an embedding with crossings; (b) an embedding with L (K 6 ) crossings.
surveys of linear layout problems, see [4,51]. Linear layout is important in several
applications, e.g., sorting with parallel stacks [49], fault-tolerant processor
array design [39], and VLSI design [9].
In this paper we study a restricted version of linear graph layout in which the
vertex order is predetermined and fixed along the node line and each edge is
drawn as an arc in one of the two pages. The objective is to embed the edges
so that the total number of crossings is minimized (see Figure 1). We refer
to this as the fixed linear crossing number problem (FLCNP) and denote the
minimum number of crossings by L (G) for a graph G.
FLCNP was shown to be NP-hard in [33]. Single-row routing with the restriction
that wires do not cross the node line is similar to FLCNP, although its
objective is to find a layout (if any) with no crossings. FLCNP also appears as
a subproblem in communications network management graphics facilities such
as CNMgraf [21]. The problem is also of general interest in graph drawing and
graphical visualization systems where crossing minimization is an aesthetic
criterion used to measure the quality of a graph drawing [14,48].
A variant of the problem in which the vertex positions are not fixed is studied
in [35] and a heuristic is given for its solution. A related parameter is the
book crossing number of a graph G, k (G), which is the minimum number of
crossings in a k-page embedding of G [2,29,42]. Note that vertex positions
are not fixed and hence it is first necessary to find an optimal ordering of
vertices in order to determine k (G). The book crossing number problem is
closely related to the pagenumber problem. The pagenumber of a graph is the
minimum number of pages necessary to embed the edges of a graph (each
edge on one page) without crossings. It is known that outerplanar graphs
comprise the 1-page embeddable graphs [2], that subhamiltonian graphs, i.e.,
subgraphs of planar hamiltonian graphs, are precisely the 2-page embeddable
graphs [2], and that planar graphs are 4-page embeddable [52]. Nonplanar
graphs, however, require at least three pages [2]. A recent survey of the k-
pagenumber and general crossing number problem on various surfaces can be
found in [40]. Crossing minimization has also been studied for the case of two
Figure 2. Edge crossing condition i < j < k < l.
levels of vertices in [17] and [28], and for the general case in [15].
Let (G) denote the general planar crossing number of a graph G. In [43] it is
shown that 2 (G) (G). Observe that L (G) 2 (G), since the achievement
of minimum crossings is dependent on an optimal ordering of vertices on the
node line.
Let V = {1, 2, ..., n}. A 2-page drawing of a graph is represented
by a pair of binary adjacency matrices A[ ] and B[ ]. For each edge ij, A[i, j]
(B[i, j]) is 1 if edge ij is embedded in the upper (lower) page and 0 otherwise.
Then any pair of edges ik and jl cross in a drawing if and only if i < j < k < l
and both lie in the same page (see Figure 2). Hence, the following formula
counts the number of crossings in a 2-page drawing D:
crossings(D) = sum over i < j < k < l of ( A[i, k] A[j, l] + B[i, k] B[j, l] ).     (1)
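To make eq. (1) concrete, the following sketch counts the crossings of a 2-page drawing directly from the two page matrices. It is an illustrative Python rendering, not the authors' C implementation; the function name and the 0-based vertex indexing are assumptions.

from itertools import combinations

def count_crossings(A, B):
    # A[i][j] == 1 if edge ij is drawn in the upper page, B[i][j] == 1 if in the
    # lower page; vertices are 0..n-1 in node-line order.
    n = len(A)
    total = 0
    # Two edges ik and jl cross iff i < j < k < l and they lie in the same page.
    for i, j, k, l in combinations(range(n), 4):
        total += A[i][k] * A[j][l] + B[i][k] * B[j][l]
    return total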
In this paper, we present eight different heuristics for FLCNP as well as a
branch-and-bound algorithm for finding exact solutions. We test the methods
on random graphs in addition to \real world" instances of graphs which model
some interconnection topologies proposed as architectures for parallel comput-
ing. Our results show that a heuristic based on the neural network model of
computation, which we simulate with a sequential algorithm, is a highly effective
method for computing L (G). The heuristic consistently outperforms
the other heuristics both in solution quality and running time. Furthermore,
for graphs with up to approximately 50 edges, the exact algorithm is a practical
choice, although its performance is highly dependent on the quality of
the initial upper bound value. Hence, the algorithms we present serve as useful
methods for computing L (G) and also for obtaining good linear 2-page
layouts of various networks. Also, since L (G) (G), they provide upper
bounds for the general planar crossing number of a graph G.
We begin by presenting some theoretical bounds, followed by a description of
the algorithms, and conclude with an experimental analysis.
Theoretical Bounds
We present some theoretical upper and lower bounds for L (G) which are used
in assessing the performance of the algorithms. Throughout our discussion we
assume good drawings of graphs, in which the following conditions hold:
(i) an edge does not cross itself,
(ii) edges with common endpoints do not cross,
(iii) any intersection of two edges is a crossing rather than tangential,
(iv) no three edges have a common crossing, and
(v) any pair of edges cross at most once.
It is a routine exercise to show, for any graph G, there is a good drawing of
G having the minimum number of crossings.
In [29], 1 (G) was defined as the
outerplanar crossing number but no results were given. However, in [42] the
following results are shown:
Theorem 1. 1 2.
Theorem 2. 1
for n 4; m 3n.
Theorem 3. k (G) m 3
27kn
Also, the following result can be deduced:
Theorem 4. 1 (K n ) = C(n, 4).
Proof. This is equivalent to the problem of arranging the vertices of the
graph on the boundary of a circle and drawing the edges as chords. Then for
each 4-tuple of vertices (i, j, k, l) along the boundary, with labels satisfying
i < j < k < l, there is precisely one crossing caused by edges ik and jl.
Hence, C(n, 4) gives the correct number of crossings. 2
The following result for K n was previously shown in [24] (see also [22,23]):
Theorem 5. 2 (K n ) <= (1/4) ⌊n/2⌋ ⌊(n-1)/2⌋ ⌊(n-2)/2⌋ ⌊(n-3)/2⌋.
Actually, equality has been shown for small values of n in the above formula.
In [13], an alternate upper bound based on the adjacency matrix is given for
K n when drawn on k pages, and tables of results for different n and k
values are given. The results for k = 2 agree with those of Theorem 5.
Theorem 6.
Proof. We construct a 2-page drawing of G with ⌈m/2⌉ edges in one page and
⌊m/2⌋ in the other. Now, assuming in the worst case that each edge crosses
every other edge exactly once, we have at most C(⌈m/2⌉, 2) + C(⌊m/2⌋, 2) cross-
ings. Expanding this sum and using the identity relating ⌈n/2⌉ and ⌊n/2⌋, we arrive
at the given inequality. 2
Also, in [42] the following result and a greedy algorithm are given for constructing
a k-page drawing of G from a 1-page drawing with the indicated
number of crossings:
Theorem 7. k (G) <= 1 (G)/k.
3 The Heuristics
We developed and tested eight different heuristics. They are grouped into two
general categories: greedy and non-greedy. The two greedy heuristics differ
only in the order in which edges are added to the layout. Descriptions of the
heuristics are given in the following sections.
We assume that vertices are fixed in the order 1, 2, ..., n along the node line.
As a pre-processing step to each algorithm, we remove all insignificant edges.
Observe that edges between consecutive vertices on the node line and the edge
1n cannot be involved in crossings according to the constraints of the problem.
Also, if there is a vertex k such that no edge ij, i < k < j, exists, then edges 1k
and kn cannot cause crossings. Hence, these edges are insignificant and may
be ignored without affecting the final solution. At the same time, the problem
size is reduced so that larger instances can be solved. The output of each
heuristic is the minimum number of crossings obtained and the corresponding
embedding.
3.1 Greedy Heuristics
The Greedy heuristic adds edges to the layout in row-major order of the adjacency
matrix of the graph, that is, first all edges 1i are added in increasing
order of i-value, then all edges 2i in increasing i-value order, etc. At each step,
an edge is embedded in the page (upper or lower) which results in the smallest
increase in the number of crossings. Ties are broken by placing the edge in
the upper page. Heuristic Gr-ran uses the same approach but adds edges in
random order.
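A compact sketch of both greedy heuristics follows, reusing count_crossings from the earlier sketch. For clarity it recounts all crossings after each tentative placement, whereas the paper's implementation recalculates crossings incrementally in O(m) per added edge; the caller is assumed to supply the edges in row-major order.

import random

def greedy_embed(n, edges, randomize=False):
    """Greedy (and, with randomize=True, Gr-ran) page assignment."""
    A = [[0] * n for _ in range(n)]
    B = [[0] * n for _ in range(n)]
    order = list(edges)              # row-major order of the adjacency matrix
    if randomize:                    # Gr-ran: random edge order instead
        random.shuffle(order)
    for (u, v) in order:
        A[u][v] = A[v][u] = 1        # tentatively place uv in the upper page
        upper = count_crossings(A, B)
        A[u][v] = A[v][u] = 0
        B[u][v] = B[v][u] = 1        # tentatively place uv in the lower page
        lower = count_crossings(A, B)
        B[u][v] = B[v][u] = 0
        if upper <= lower:           # ties go to the upper page
            A[u][v] = A[v][u] = 1
        else:
            B[u][v] = B[v][u] = 1
    return A, B, count_crossings(A, B)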
3.2 Maximal Planar Heuristic
Heuristic Mplan finds a maximal planar subgraph in each page. In the first
phase, edges are added in row-major order of the adjacency matrix to the
upper page. If an edge causes a crossing, it is put aside until the second phase.
In the second phase, all edges put aside in the first phase are added to the
lower page. If an edge causes a crossing, it is once again put aside. In the third
phase, any edges put aside in the second phase are added to the page with the
smallest increase in crossings.
3.3 Edge-Length Heuristic
Heuristic E-len initially orders all edges non-increasingly by their "length",
i.e., |u - v| for edge uv. The intuition here is that longer edges have a greater
potential for crossings than shorter edges and hence should be embedded first.
Each edge is added one at a time to the page of smallest increase in crossings.
3.4 One-Page Heuristic
This is essentially the same method described in [42] and implied by Theorem
7 with k = 2. Heuristic 1-page initially embeds all edges in the upper page.
Figure
3. Fixed embeddings for base cases of dynamic programming heuristic.
This is followed by a \local improvement" phase in which each edge is moved
to the lower page if it results in fewer crossings. Edges are considered for
movement in order of non-increasing local crossing number, i.e., the number
of crossings involving an edge.
3.5 Dynamic Programming Heuristic
Unfortunately, FLCNP does not satisfy the principle of optimality which says
that in an optimal sequence of decisions each subsequence must also be opti-
mal. Subgraphs embedded optimally earlier in the process do not necessarily
lead to optimal embeddings of larger subgraphs when edges are added between
the smaller subgraphs later on. However, this does not preclude the potential
benet of a dynamic programming approach to the problem as a heuristic so-
lution. If most crossings are localized within relatively small subgraphs along
the node line for a given graph, a dynamic programming method may produce
a good solution.
Let G i::j denote the subgraph induced by consecutive vertices i..j along the
node line, and let cr[i, k, j] be the number of crossings in the subgraph formed by
G i::k , G k+1::j , and the set of "link edges" between G i::k and
G k+1::j . We compute cr[i, k, j] greedily by adding each link edge to the page
with the smallest increase in crossings. This leads to a recurrence for the number
of crossings, nc[1, n], computed by a dynamic programming solution:
nc[i, j] = min over i <= k < j of ( nc[i, k] + nc[k+1, j] + cr[i, k, j] ).
The base cases for the algorithm are the subgraphs of order 2-4. Optimal
embeddings for these are predetermined and shown in Figure 3.
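The sketch below mirrors this recurrence directly. A page assignment is stored per subinterval as a dictionary; the base cases of order at most 4 are handled by the same greedy placement (which is crossing-free for them) rather than by the predetermined embeddings of Figure 3, and crossings are recounted from scratch, so this is far less efficient than the authors' implementation.

def crossings(emb):
    """Count crossings of an embedding given as {(u, v): 'U' or 'L'} with u < v."""
    items, cnt = list(emb.items()), 0
    for x in range(len(items)):
        for y in range(x + 1, len(items)):
            (a, b), p = items[x]
            (c, d), q = items[y]
            if p == q and (a < c < b < d or c < a < d < b):
                cnt += 1
    return cnt

def place_greedily(emb, new_edges):
    """Add each edge to the page giving the smaller increase in crossings."""
    for e in new_edges:
        emb[e] = 'U'; up = crossings(emb)
        emb[e] = 'L'; lo = crossings(emb)
        emb[e] = 'U' if up <= lo else 'L'
    return emb

def dp_embed(n, edges):
    E = {(min(u, v), max(u, v)) for (u, v) in edges}
    best = {}
    for i in range(n):                              # base cases: order 1..4
        for j in range(i, min(i + 4, n)):
            best[(i, j)] = place_greedily(
                {}, sorted(e for e in E if i <= e[0] and e[1] <= j))
    for size in range(5, n + 1):                    # larger subgraphs G_{i..j}
        for i in range(n - size + 1):
            j = i + size - 1
            cands = []
            for k in range(i, j):                   # split at k and merge halves
                emb = dict(best[(i, k)]); emb.update(best[(k + 1, j)])
                links = sorted(e for e in E if i <= e[0] <= k < e[1] <= j)
                place_greedily(emb, links)          # cr[i, k, j] added greedily
                cands.append((crossings(emb), emb))
            best[(i, j)] = min(cands, key=lambda t: t[0])[1]
    return crossings(best[(0, n - 1)]), best[(0, n - 1)]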
3.6 Bisection Heuristic
This heuristic uses a straightforward divide-and-conquer approach. The original
graph G 1::n is initially bisected into two smaller subgraphs G 1::⌊n/2⌋ and
G ⌊n/2⌋+1::n by temporarily removing the link edges between them. Each subgraph
is then bisected recursively in the same manner until subgraphs of order
4 or less are obtained. Embeddings for these base cases are the same as those
shown in Figure 3. When combining smaller subgraphs, link edges between
the subgraphs are embedded in greedy fashion as before. A similar method
is described in [3], although the way in which edges are re-inserted into the
embedding after the bisection phase is not clearly specified.
3.7 Neural Network Heuristic
This heuristic is based on the neural network model of parallel computation
[27]. In this model there are a large number of simple processing elements
called neurons. We assume the McCulloch-Pitts binary neuron in which each
element has a binary state [34]. This model has also been used for the graph
planarization problem [47]. For testing purposes, a sequential simulator of
the actual parallel algorithm was used. The model uses 2m neurons for a
graph with m edges. With each edge is associated an "up" and a "down"
neuron, representing the two pages of the plane. Briefly, two kinds of forces,
'excitatory' and 'inhibitory', are present in the neural network. The presence
of an edge uv in a graph encourages the two neurons for the edge to fire as the
excitatory force, while neurons of crossing edges are discouraged from firing
as the inhibitory force. At each iteration of the main processing loop, neuron
values are recalculated according to specified motion equations. Eventually,
after several iterations, either an 'up' or 'down' neuron for each edge is in an
excitatory state and a final embedding is obtained.
It is straightforward to simulate the parallel algorithm with a sequential algo-
rithm. Whereas, in the parallel algorithm, the output values of the neurons are
simultaneously updated outside of the motion equation loop, in the sequential
simulator, the output value of each neuron is individually computed in
sequence as soon as the input of the neuron is evaluated inside of the motion
equation loop. A drawback to neural network algorithms is the possibility of
non-convergence. Typically, a constant limit is imposed upon the number of
iterations of the motion equation computation loop, and the process is terminated
if convergence to the equilibrium state has not occurred by the limit.
Full details of the heuristic are given in [11].
Table 1
Time Complexities of the Heuristics.
Heuristic    Time Complexity
Greedy       O(m^2)
Gr-ran       O(m^2)
Mplan        O(m^2)
E-len        O(m^2)
1-page       O(m^2)
dynamic      O(m^2 n^2)
bisection    O(n^4)
neural       O(m)
4 Time Complexities of the Heuristics
The time complexities of the heuristics and exact algorithm are given in Table
1. For the greedy, maximal planar, edge-length, and one-page heuristics, the
total time is dominated by the time to calculate the number of crossings
after each edge is added to the layout, which is O(n 4 ) if eq. (1) is directly
applied. Instead, however, we use a \dynamic" crossing recalculation method
which only checks for crossings involving the edge just added. This lowers the
recalculation time to O(m) for each edge added. For the dynamic programming
heuristic, there are a total of O(n 2 ) subgraphs to process, and each subgraph
requires O(m 2 ) time to add link edges and recalculate crossings. The time for
the bisection heuristic is given by the recurrence T(n) = 2T(n/2) + O(n^4),
where O(n 4 ) is the time to merge each pair of subgraphs, and this recurrence
has the solution O(n 4 ).
The sequential simulator of the neural network heuristic has a main loop with
a number of iterations dependent on the rate of convergence of the system to a
stable state, which is not bounded by any function of the input size. However,
in experimental testing, the maximum number of loop iterations observed for
any test graph was 84. There are O(1) operations performed on each of the m
edge neurons per iteration. Hence, the time complexity is O(m).
5 An Exact Algorithm
The number of fixed linear layouts of a graph is 2^m. Ignoring up to n insignificant
edges, this yields at most 2^(n(n-3)/2) different layouts. Since any layout
has a "mirror image" (symmetric) drawing with the same number of crossings,
obtained by switching the embeddings between the two pages, only one
half of this number need be checked, or 2^(n(n-3)/2 - 1). A branch-and-bound algorithm
was developed to nd optimal solutions by enumerating all possible
embeddings of edges subject to optimization bound conditions. Two bounding
conditions were applied to prune partial solution paths in the search tree. A
path (branch) of the tree is pruned if:
(1) the number of crossings in the partial solution exceeds the current global
upper bound, or
(2) the number of crossings in the partial solution plus the number of extra
crossings resulting from adding each remaining edge to the partial embedding
greedily and independently of other remaining edges exceeds the current global
upper bound.
A backtracking algorithm was developed to enumerate the embeddings and
apply the bounding conditions. An initial global upper bound was obtained
from the best solution generated by the theoretical bounds and the heuristics.
As with the heuristics, the output of the algorithm is the number of crossings
obtained and the corresponding embedding.
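A simplified sketch of the backtracking search is given below. It applies bounding condition (1) and the mirror-image symmetry reduction, but omits the lookahead bound (2); the initial upper bound upper0 is assumed to come from the best heuristic or theoretical value, as described above.

def branch_and_bound(n, edges, upper0):
    """Exact fixed linear crossing number by backtracking with pruning."""
    edges = sorted({(min(u, v), max(u, v)) for (u, v) in edges})
    best = {'cost': upper0, 'pages': None}

    def added_crossings(pages, k, page):
        # crossings created by placing edges[k] on 'page', given edges[0..k-1]
        a, b = edges[k]
        cnt = 0
        for idx in range(k):
            if pages[idx] == page:
                c, d = edges[idx]
                if a < c < b < d or c < a < d < b:
                    cnt += 1
        return cnt

    def recurse(k, pages, cost):
        if cost > best['cost']:          # bounding condition (1)
            return
        if k == len(edges):
            if cost <= best['cost']:
                best['cost'], best['pages'] = cost, list(pages)
            return
        # mirror-image symmetry: the first edge may be fixed in the upper page
        for page in (('U',) if k == 0 else ('U', 'L')):
            pages.append(page)
            recurse(k + 1, pages, cost + added_crossings(pages, k, page))
            pages.pop()

    recurse(0, [], 0)
    return best['cost'], best['pages']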
6 Test Graphs
Several classes of test graphs were generated, and they are briefly described
in the following sections. Since we were interested in obtaining good upper
bounds for the planar crossing number of families of graphs, we generated
several types of hamiltonian graphs. Many networks proposed as models of
parallel computer architectures are hamiltonian. For each graph G, the strategy
was to x the vertices along the node line in the order of a hamiltonian
cycle, if possible, since G may have a crossing-minimal drawing in which there
is a hamiltonian cycle which is not crossed by an edge. If the vertices are
then positioned along the node line in the given hamiltonian order, and the
edges are optimally drawn, then such a drawing would have (G) crossings.
However, as is discussed in [9], not all hamiltonian cycle orderings of vertices
correspond to an optimal vertex ordering, that is, one that leads to a linear
layout with a number of crossings equivalent to the planar crossing number
of the graph. To find an optimal cycle, for example, for the d-dimensional hypercube
, as many as 2 d 3 d! cycles [41], in the worst case, would have to be
generated and tested, and this would be impractical for large d. Nevertheless,
by using a hamiltonian ordering of vertices on the node line, the likelihood of
computing the planar crossing number of the graph was increased.
6.1 Random Graphs
We used the traditional model G n;p of random graphs [6] formed by independently
including each edge of K n with probability p. We generated
random graphs over a range of orders. No attempt was made to find a
hamiltonian cycle in any graph, due to the computational difficulty of this
problem. Hence, the vertices were simply positioned along the node line in the
order 1, 2, ..., n.
6.2 Interconnection Network Graphs
Many interconnection topologies have been proposed for parallel computing
architectures. While optimal book embeddings have been investigated for several
of the networks (e.g., [9,26]), the fixed linear crossing number has not to
our knowledge been investigated. Detailed descriptions of many of these networks
can be found in [30,53]. Here we provide only brief descriptions. All of
the network graphs generated are hamiltonian. Since hamiltonian cycles may
be easily found in these graphs, vertices were positioned along the node line in
hamiltonian order for the testing, with the hope of obtaining better approximations
to the planar crossing number of the graph, as discussed earlier.
6.2.1 Hypercubic Networks
The hypercube, Q d , of dimension d, is a d-regular graph with 2^d vertices and
d 2^(d-1) edges. Each vertex is labelled by a distinct d-bit binary string, and
two vertices are adjacent if they differ in exactly one bit. Q 3 is shown in Figure
4(a). Hypercubes of dimension
Several derivatives of the hypercube have also been proposed. These are generally
referred to as hypercubic networks. While the hypercube has unbounded
vertex degree, according to its dimension, most of the hypercubic networks
have constant degree bounds, usually 3 or 4, making them less dense with
increasing order.
The cube-connected-cycles [36], CCC d , of dimension d is formed from Q d by
replacing each vertex u with a d-cycle of vertices in CCC d and then joining
each cycle vertex to a cycle vertex of the corresponding neighbor of u in
Figure
4(b) shows CCC 3 . CCC d has d2 d vertices, 3d2 d 1 edges, and is
3-regular. The instances
The twisted cube [7], TQ d , has the same order, size, and regularity as Q d . TQ d
is formed by twisting one pair of edges in a shortest cycle (4-cycle) of Q d .
Figure
4(c) displays TQ 3 . TQ d of dimension
The crossed cube, CQ d , is defined in [18]. Like Q d , CQ d has 2^d vertices, d 2^(d-1)
edges, and is d-regular. Figure 4(d) displays CQ 3 . CQ d of dimension
were generated.
The folded cube [19], FLQ d , is formed from Q d by adding the 2^(d-1) extra
complementary edges joining each vertex u to its bitwise complement,
where u is a d-bit binary string.
Figure
4(e) displays FLQ 3 . FLQ d of dimension
The hamming cube [16], HQ d , of dimension d, has 2 d vertices and (d+2)2 d 1 2
edges. HQ d has minimum degree d + 1, maximum degree 2d 1, and is a
supergraph of Q d . Figure 4(f) displays HQ 3 . HQ d of dimension
generated.
The binary de Bruijn graph [12], DB d , is a directed graph of 2^d vertices and
2^(d+1) arcs. The vertices are labelled by the 2^d binary d-tuples. There is an arc
from vertex x 1 ...x d to vertex y 1 ...y d iff x 2 ...x d = y 1 ...y d-1 . As a result, vertices
00:::0 and 11:::1 have self-loops. Undirected de Bruijn graphs [1], UDB d , are
formed from directed de Bruijn graphs by ignoring the orientations of the
edges and deleting the two self-loops, which are irrelevant in determining the
crossing number. UDB d has 2 d vertices, 2 d+1 2 edges, and maximum degree
4.
Figure
4(g) displays UDB 3 , which is planar. UDB d of dimension
were generated.
The wrapped butterfly graph [30], WBF d , of dimension d has d 2^d vertices and
d 2^(d+1) edges. The graph is 4-regular. Figure 5(f) displays WBF 3 . WBF d of
dimension
The shuffle-exchange graph [30], SX d , has 2^d vertices and 3 * 2^(d-1) edges. uv
is an edge of SX d if either u and v, which are d-bit binary strings, differ in
precisely the last bit, or u is a left or right cyclic shift of v. For embedding
purposes, we ignore the two self-loop edges at vertices 00:::0 and 11:::1, which
results in a graph with 3 2 d 1 2 edges. Pendant vertices 00:::01 and 11:::10
may also be ignored to facilitate a hamiltonian vertex ordering along the node
line.
Figure
5(d) displays SX 3 . SX d of dimension
Figure 4. Some interconnection networks: (a) hypercube Q 3 ; (b) cube-connected cycles CCC 3 ; (c) twisted cube TQ 3 ; (d) crossed cube CQ 3 ; (e) folded cube FLQ 3 ; (f) hamming cube HQ 3 ; (g) undirected de Bruijn graph UDB 3 .
6.2.2 Other Networks
The d d torus, T d;d , is the graphical cross product of the cycles C d and C d .
Figure
5(a) displays T 4;4 . Torii T d;d , for
The star graph [45], ST d , has d! vertices labelled by all permutations of {1, 2, 3, ..., d}.
Two vertices are adjacent iff the corresponding permutations differ only in the
first and one other position. Hence, ST d has (d-1)d!/2 edges and is (d-1)-
regular. Figure 5(b) displays ST 4 . ST d for
The pancake graph [45], PK d , has d! vertices labelled by all permutations of
the elements {1, 2, ..., d}. Two vertices are adjacent iff one can be obtained
by flipping the first i elements of the other for some i >= 2. PK d has the same
order and size as ST d and is also (d-1)-regular. Figure 5(c) displays PK 4 .
PK d for
The pyramid graph [30], PM d , has d levels of vertices, with each level k, 0 <=
k < d, having 4^k vertices, for a total of (4^d - 1)/3 vertices. The interconnection
structure of PM 3 is shown in Figure 5(e). PM d for
Figure 5. Additional interconnection networks: (a) torus T 4,4 ; (b) star graph ST 4 ; (c) pancake graph PK 4 ; (d) shuffle-exchange graph SX 3 ; (e) pyramid PM 3 ; (f) wrapped butterfly WBF 3 ; (g) circulant graph C 8 (1, 2, 4).
6.3 Other Graph Families
Also included in the testing were complete graphs, K n , for several values of n, and
circulant graphs. The circulant graph [5], C n (a 1 , ..., a k ), where 1 <= a 1 < ... <
a k < (n + 1)/2, is a regular hamiltonian graph with n vertices, with vertices
(i +/- a 1 , ..., i +/- a k ) (mod n) adjacent to each vertex i. Circulants with n =
20..46 and various a i values were generated. C 8 (1, 2, 4) is shown in Figure
5(g). Vertices were placed along the node line in hamiltonian order for the
testing.
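For reference, a small sketch that generates the edge list of a circulant graph in this node-line order is shown below; the function name is illustrative.

def circulant(n, offsets):
    """Edge list of C_n(a_1, ..., a_k) with vertices 0..n-1 in node-line order."""
    edges = set()
    for i in range(n):
        for a in offsets:                    # each a_j with 1 <= a_j < (n+1)/2
            j = (i + a) % n
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# Example: C_8(1, 2, 4), shown in Figure 5(g).
print(circulant(8, (1, 2, 4)))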
7 Experimental Results
All algorithms were implemented in the C language on a DEC AlphaServer
2100A 5/300 workstation with 300 MHz cpu speed and 512 megabytes of
RAM.
Test graph size was limited by the excessive time requirements of the bisection
and dynamic heuristics and the branch-and-bound algorithm. The largest
test graph contained 1016 edges. Due to the memory needed by the program
to store all of the subgraphs generated by bisection and dynamic, it was impossible
to test graphs larger than this with the current implementation and
hardware. The two greedy heuristics and heuristics Mplan, 1-page, Neural, and
E-len, on the other hand, can accommodate much larger graphs.
Results for the different classes of graphs are shown in Tables 2-5 and Figures
6-10.
Figure
11 shows a plot of heuristic performance on all 196 test graphs.
For complete graphs (Table 2), both Neural and E-len found optimal solutions
in all cases. The Greedy heuristic had the worst performance on these graphs.
For the hypercubic networks, as indicated by Table 3 and Figure 8, bisect and
dynamic performed poorly, while the remaining heuristics had much better
performance. In particular, neural found the optimal solution for 15 of the 39
test cases.
For the random graphs tested (see Figure 7), the edge density was approximately
the same (approximately 0.5) for all instances, with some minor variation due
to normal inconsistencies in the pseudorandom number generator. Here, we
define the edge density of a graph to be m/|E(K n )| for an n-vertex, m-edge
graph.
In
Figure
11, we plotted the number of crossings versus the edge density for
all test graphs. It is interesting to observe that the number of crossings found
by all heuristics increases dramatically after 80 vertices, even though the edge
density is constant. It is likely that the xed vertex orderings along the node
line become more of a factor in heuristic performance as the number of vertices
increases. Due to the rather large scale of the gure, the plot is undetectable
for sparse graphs, until the density exceeds about 0.5. Any differences in performance
between six of the heuristics (excluding bisect and dynamic) are still
hard to observe after that. Hence, to gain a better perspective of relative
performance, we also compared the heuristics according to a ranking scheme,
where the rank of a heuristic A, was dened to be k, 1 k 8, if A obtained
the k th best solution among the eight heuristics for a given instance. Table 6
shows the overall rankings on all test graphs. Heuristic Neural either had or
tied for the best average rank in 15 of the 17 classes, and also had the best
composite average rank (1:35) of the eight heuristics for all test graphs.
At the other extreme, bisect exhibited the poorest overall performance, finish-
ing last or tied for last in average rank on all graph classes. Its performance
can probably be improved by a more clever method for adding link edges between
subgraphs after the bisection process. The same applies to the dynamic
programming heuristic, although its performance was significantly better than
bisect.
It is also interesting to examine the degree of optimality of the heuristics in
cases where the optimal solution is known. This is summarized in Table 7.
Optimal solutions were obtained for 95 of the 196 test graphs. Neural found
the optimal solution in 75 (79%) of these cases. Its solution deviated from the
optimal by a total of 38 crossings in the remaining 20 cases, for an average
deviation of 1:9 crossings. The maximum deviation of 6 crossings by Neural
occurred on the circulant graph C 24 (1; 3; 5). For the classes of complete graphs,
torii, de Bruijn, and hypercubic graphs (except hypercubes), Neural found the
optimal or conjectured optimal solution for all test cases with known solutions.
The random greedy heuristic finished second in overall performance with an
average deviation of 1.9 crossings from the optimal solution on 95 graphs. The
standard greedy heuristic, on the other hand, was second to last in perfor-
mance. Hence, the advantage of random edge selection is apparent.
In the tables, the column headings are abbreviated as follows:
Opt.: optimal solution from branch-and-bound algorithm
Greedy: greedy heuristic
Gr-ran: random greedy heuristic
Mplan: maximal planar heuristic
E-len: edge length heuristic
1-page: one-page heuristic
Dyn: dynamic programming heuristic
Bisect: bisection heuristic
Neural: neural network heuristic
Within the Opt. column, in cases where an optimal solution was not obtainable
due to problem size, a pair of values LB:UB indicates the best known theoretical
lower and upper bounds. Except where noted, bounds were obtained
from Theorems 3-7.
We note that for the hypercube, Q 5 , the optimal solution obtained by the
exact algorithm contained 60 crossings. However, a drawing is given in [31]
with only 56 crossings. Hence, the particular hamiltonian cycle used here was
Table 2
Results for Complete Graphs: Number of Crossings Found by Heuristic.
Columns: Graph, Opt., Greedy, Gr-ran, Mplan, E-len, 1-page, Dyn, Bisect, Neural (rows for K8 and larger complete graphs).
Footnotes in the table mark optimal and conjectured optimal values.
not an optimal vertex ordering necessary to achieve the minimum. The same
can be noted for the solutions obtained for the torii T 5;5 and T 7;7 (20 and
48, resp.), which are higher than the known optimal values of 15 and 35,
respectively.
For heuristics Neural and Gr-ran, the best solution obtained from 10 trials per
instance is indicated. As a practical limit, Neural was allowed to iterate up to
5000 times per instance before non-convergence was assumed. However, the
maximum number of iterations observed for any test graph was 84. Moreover,
graph size had no observable eect on the number of iterations.
A comparison of running times for the heuristics on a sampling of 48 graphs
is shown in Figure 12. The times for E-len, Mplan, and 1-page are dwarfed by
those of the dynamic programming heuristic and thus are barely discernible
along the x-axis in the plot. Running times for the bisection heuristic were
somewhat longer than these but still much shorter than the dynamic programming
heuristic.
Figure
12 shows the running times of the heuristics for a representative sampling
of the test cases selected from each of the dierent classes of graphs.
The sharp spikes in the plots for the dynamic and bisection heuristics may
be attributed to instances with a large number of vertices, since this has a
dramatic effect on the depth of recursion and resulting cpu overhead for both
heuristics. The cpu times for the remaining heuristics were all comparatively
fast. In particular, the neural network heuristic required at most 84 iterations
on any instance and ran noticeably faster than the other heuristics in most
cases.
The cpu times for the exact algorithm are shown in Table 8 on a sampling of 17
graphs. The table includes the percentage of the total search space explored. In
Table 3
Results for Hypercubic Networks: Number of Crossings Found by Heuristic.
Columns: Graph, n, m, Opt., Greedy, Gr-ran, Mplan, E-len, 1-page, Dyn, Bisect, Neural.
In the Opt. column, a single value indicates an optimal solution; a pair of values indicates theoretical lower and upper bounds (footnotes cite bounds from [31] and [10]).
Figure 6. Heuristic results for complete graphs (number of crossings vs. number of vertices for each heuristic).
general, exact solutions were feasible only for graphs with up to approximately
50 significant edges. However, the quality of the initial upper bound is also
very critical to the running time. For example, we were able to process the
cube-connected cycles CCC 4 , with 96 edges, in only 3 cpu seconds due to the
optimality of the heuristic solution (and initial upper bound).
8 Conclusion and Remarks
We have presented several heuristics and an exact algorithm for computing
the fixed linear crossing number of a graph. An experimental analysis of their
performance on a variety of test graphs has been given. Our main conclusion
is that a heuristic based on the neural network model of computation is a
highly effective method for solving the problem, giving near-optimal solutions
in most cases and consistently better solutions than other popular heuristics
in other cases. An exact algorithm has been shown effective for graphs with up
to 50 significant edges, in general, although it can handle much larger graphs
Figure 7. Heuristic results for 100 random graphs (number of crossings vs. number of vertices for each heuristic).
if the initial upper bound is fairly tight.
The algorithms are useful in providing upper bounds to the book crossing
number and planar crossing number of a graph as well as for finding crossing-
minimal 2-page layouts of parallel interconnection networks.
In future work, we plan to study the worst-case performance of the heuristics
and to investigate their adaptation to the unxed linear crossing number
problem. Since this requires finding an optimal vertex ordering on the node
line, the problem complexity is greater than that of FLCNP.
Also, recently some other algorithms have been brought to the attention of
the author, i.e., [46,38], and these may be included along with the present set
of algorithms in future experiments.
Figure 8. Heuristic results for hypercubic networks (number of crossings vs. number of edges for each heuristic).
--R
a competitor for the hypercube?
The book thickness of a graph
A framework for solving VLSI graph layout problems
Embedding graphs in books: a survey
Circulants and their connectivities
A new variation on hypercubes with smaller diameter
Embedding graphs in books: a layout problem with applications to VLSI design
Topological properties of some interconnection network graphs
A neural network algorithm for a graph layout problem
An upper bound to the crossing number of the complete graph drawn on the pages of a book
Algorithms for drawing graphs: an annotated bibliography
An experimental comparison of four graph drawing algorithms
A theoretical network model and the Hamming cube networks
Heuristics for reducing crossings in 2-layered networks
The crossed cube architecture for parallel computation
On the Eggleton and Guy conjectured upper bound for the crossing number of the n-cube
Crossing number of graphs
Latest results on crossing numbers
The toroidal crossing number of the complete graph
Toroidal graphs with arbitrarily high crossing numbers
Kautz and shu
The book thickness of a graph
Introduction to Parallel Algorithms and Architectures: Arrays
Bounds for the crossing number of the n-cube
On the page number of graphs
Crossing minimization in linear embeddings of graphs
A logical calculus of ideas imminent in nervous activity
Permutation procedure for minimising the number of crossings in a network
The cube-connected cycles: a versatile network for parallel computation
IEEE Trans.
Using simulated annealing to
The DIOGENES approach to testable fault-tolerant arrays of processors
On some topological properties of hypercubes
A parallel stochastic optimization algorithm for
Automatic graph drawing and readability of diagrams
Sorting using networks of queues and stacks
Computational aspects of VLSI
Linear and book embeddings of graphs
Four pages are necessary and su-cient for planar graphs
Parallel and Distributed Computing Handbook
--TR
Four pages are necessary and sufficient for planar graphs
Linear and book embeddings of graphs
Embedding graphs in books: a layout problem with applications to VLSI design
Automatic graph drawing and readability of diagrams
Crossing Minimization in Linear Embeddings of Graphs
Bounds for the crossing number of the <italic>N</italic>-cube
Introduction to parallel algorithms and architectures
A new variation on hypercubes with smaller diameter
Graphs with <?Pub Fmt italic>E<?Pub Fmt /italic> edges have pagenumber<inline-equation> <f> <fen lp="par"><rad><rcd><i>E</i></rcd></rad><rp post="par"></fen> </f> </inline-equation> <?Pub Fmt italic>O<?Pub Fmt /italic>
On VLSI layouts of the star graph and related networks
Algorithms for drawing graphs
Parallel and distributed computing handbook
The book crossing number of a graph
Embedding de Bruijn, Kautz and shuffle-exchange networks in books
An experimental comparison of four graph drawing algorithms
CNMGRAFMYAMPERSANDmdash;graphic presentation services for network management
Sorting Using Networks of Queues and Stacks
The cube-connected cycles: a versatile network for parallel computation
Properties and Performance of Folded Hypercubes
The Crossed Cube Architecture for Parallel Computation
A Theoretical Network Model and the Hamming Cube Networks
Book Embeddings and Crossing Numbers | linear layout;neural network;crossing number;branch-and-bound;heuristic |
635468 | Some recent advances in validated methods for IVPs for ODEs. | Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods (often called interval methods) for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced.We present a brief overview of interval Taylor series (ITS) methods for IVPs for ODEs and discuss some recent advances in the theory of validated methods for IVPs for ODEs. In particular, we discuss an interval Hermite-Obreschkoff (IHO) scheme for computing rigorous bounds on the solution of an IVP for an ODE, the stability of ITS and IHO methods, and a new perspective on the wrapping effect, where we interpret the problem of reducing the wrapping effect as one of finding a more stable scheme for advancing the solution. | Preprint submitted to Elsevier Preprint 20 November 2000
is D. The condition (2) permits the initial value
to be in an interval, rather than specifying a particular value. We assume
that the representation of f contains a finite number of constants, variables,
elementary operations, and standard functions. Since we assume f ∈ C^(k-1)(D),
we exclude functions that contain, for example, branches, abs, or min. For
expositional convenience, we consider only autonomous systems. This is not
a restriction of consequence, since a nonautonomous system of ODEs can be
converted into an autonomous one. Moreover, the methods discussed here can
be extended easily to nonautonomous systems.
We consider a grid t_0 < t_1 < ... < t_m, which is not necessarily equally spaced,
and denote the stepsize from t_(j-1) to t_j by h_(j-1) = t_j - t_(j-1). The step from
t_(j-1) to t_j is referred to as the jth step. We denote the solution of (1) with an
initial condition y_(j-1) at t_(j-1) by y(t; t_(j-1), y_(j-1)). For an interval, or
an interval vector, [y_(j-1)], we denote by y(t; t_(j-1), [y_(j-1)]) the set of solutions
{ y(t; t_(j-1), y_(j-1)) | y_(j-1) ∈ [y_(j-1)] }.
Our goal is to compute interval vectors [y_j], j = 1, 2, ..., m, that are guaranteed
to contain the solution of (1-2) at t_1, t_2, ..., t_m.
Standard numerical methods for IVPs for ODEs attempt to compute an approximate
solution that satisfies a user-specified tolerance. These methods are
usually robust and reliable for most applications, but it is possible to find
examples for which they return inaccurate results. On the other hand, if a validated
method for IVPs for ODEs returns successfully, it not only produces a
guaranteed bound for the true solution, but also verifies that a unique solution
to the problem exists for all initial values in [y_0].
There are situations when guaranteed bounds are desired or needed. For ex-
ample, a guaranteed bound on the solution could be used to prove a theorem
[28]. Also, some calculations may be critical to the safety or reliability of a
system. Therefore, it may be necessary or desirable to ensure that the true
solution is within the computed bounds.
One reason validated solutions to IVPs for ODEs have not been popular in
the past is that their computation typically requires considerably more time
and memory than does that of standard methods. However, now that "chips
are cheap", it seems natural to shift the burden of determining the reliability
of a numerical solution from the user to the computer, at least for standard
problems that do not require a very large amount of computer resources.
In addition, there are situations where interval methods for IVPs for ODEs
may not be computationally more expensive than standard methods. For ex-
ample, many ODEs arising in practical applications contain parameters. Often
these parameters cannot be measured exactly, but are known to lie in certain
intervals, as, for example, in economic models or in control problems. In these
situations, a user might want to compute solutions for ranges of parameters. If
a standard numerical method is used, it has to be executed many times with
different parameters, while an interval method can "capture" all the solutions
at essentially no extra cost.
Important developments in the area of validated solutions of IVPs for ODEs
are the interval methods of Moore [19], Kruckeberg [13], Eijgenraam [8], and
Lohner [17]. All these methods are based on Taylor series. One reason for the
popularity of this approach is the simple form of the error term. In addition,
the Taylor series coefficients can be readily generated by automatic differenti-
ation, and both the order of the method and its stepsize can be changed easily
from step to step.
Usually, validated methods for IVPs for ODEs are one-step methods, where
each step consists of two phases:
Algorithm I: validate existence and uniqueness of the solution with some
stepsize, and
Algorithm II: compute a tight enclosure for the solution.
The main difficulty in Algorithm I is how to validate existence and uniqueness
of the solution with a stepsize that corresponds to the order of Algorithm
II. The main obstacle in Algorithm II is how to reduce the so-called
wrapping effect, which arises when the solution set, which is not generally a
box, is enclosed in, or wrapped by, a box on each integration step, thus
introducing overestimations in the computed bounds. Currently, Lohner's QR-
factorization method is the most effective standard scheme for reducing the
wrapping effect.
The methods considered in this paper are based on high-order Taylor series
expansions with respect to time. Recently, Berz and Makino [4] proposed a
method for reducing the wrapping effect that employs high-order Taylor series
expansions in both time and the initial conditions. Berz and Makino's scheme
validates existence and uniqueness and also computes tight bounds on the
solution at each grid point t j in one phase, while the methods considered here
separate these tasks into two phases, as noted above.
The purpose of this paper is to present a brief overview of interval Taylor
series (ITS) methods for IVPs for ODEs and to discuss some recent advances
in the area of validated ODE solving. In particular, we discuss
an interval Hermite-Obreschkoff (IHO) scheme for computing tight enclosures
on the solution [20, 21];
instability in interval methods for IVPs for ODEs due to the associated
formula for the truncation error [20, 21], which appears to make it difficult
to derive effective validated methods for stiff problems; and
a new perspective on the wrapping effect [22], where we view the problem
of reducing the wrapping effect as one of finding a more stable scheme for
advancing the solution.
Section 2 introduces interval-arithmetic operations, enclosing ranges of func-
tions, and automatic generation of Taylor series coefficients. In Section 3, we
briefly discuss methods for validating existence and uniqueness of the solution,
explain the wrapping effect, and describe Lohner's QR-factorization method
for reducing the wrapping effect.
In Section 4, we outline the IHO method. Section 5 shows that the stability of
the ITS and the IHO methods depends not only on the formula for advancing
the solution, as in point methods for IVPs for ODEs, but also on the associated
formula for the truncation error. Section 6 explains how the wrapping effect
can be viewed as a source of instability in interval methods for IVPs for ODEs
and how Lohner's QR-factorization scheme improves the stability of an interval
method.
The set of intervals on the real line R is defined by
IR = { [a] = [\underline{a}, \overline{a}] | \underline{a} ≤ \overline{a}; \underline{a}, \overline{a} ∈ R }.     (3)
If \underline{a} = \overline{a}, then [a] is a point interval; if \underline{a} ≥ 0, then [a] is nonnegative ([a] ≥ 0);
and if \underline{a} = −\overline{a}, then [a] is symmetric. Two intervals [a] and [b] are equal if \underline{a} = \underline{b}
and \overline{a} = \overline{b}.
Let [a] and [b] ∈ IR, and ◦ ∈ {+, −, ·, /}. The interval-arithmetic operations
are defined [19] by
[a] ◦ [b] = { x ◦ y | x ∈ [a], y ∈ [b] },   with 0 ∉ [b] when ◦ = /,
which can be written in the following equivalent form (we omit · in the nota-
tion for multiplication):
[a] + [b] = [\underline{a} + \underline{b}, \overline{a} + \overline{b}],     (4)
[a] − [b] = [\underline{a} − \overline{b}, \overline{a} − \underline{b}],     (5)
[a] [b] = [ min{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}}, max{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}} ],     (6)
[a] / [b] = [a] · [1/\overline{b}, 1/\underline{b}],   0 ∉ [b].     (7)
The strength of interval arithmetic when implemented on a computer is in
computing rigorous enclosures of real operations by including rounding errors
in the computed bounds. To include such errors, we round the real intervals
in (4{7) outwards, thus obtaining machine interval arithmetic. For example,
when adding intervals, we round \underline{a} + \underline{b} down and round \overline{a} + \overline{b} up. For simplicity
of the discussion, we assume real interval arithmetic in this paper. Because
of the outward roundings, intervals computed in machine interval arithmetic
always contain the corresponding real intervals.
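The following minimal Python class sketches the real-interval operations (4)-(7) and the width and midpoint functions used later in this section. It works with ordinary floating-point endpoints and does not perform the outward rounding a machine interval arithmetic requires, so it only illustrates the algebra.

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi   # point interval if hi omitted

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        assert not (other.lo <= 0 <= other.hi), "0 must not lie in the divisor"
        return self * Interval(1 / other.hi, 1 / other.lo)

    def width(self):
        return self.hi - self.lo

    def midpoint(self):
        return (self.lo + self.hi) / 2

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"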
Definition (3) and formulas (4-6) can be extended to vectors and matrices. If
the components of a vector or matrix are intervals, we have interval vectors
and matrices, respectively. The arithmetic operations involving interval vectors
and matrices are defined by the standard formulas, except that reals are
replaced by intervals and real arithmetic is replaced by interval arithmetic in
the associated computations.
We have an inclusion of intervals:
[a] ⊆ [b] ⟺ \underline{b} ≤ \underline{a} and \overline{a} ≤ \overline{b}.
The interval-arithmetic operations are inclusion monotone. That is, for real
intervals [a], [a 1 ], [b], and [b 1 ] such that [a] ⊆ [a 1 ] and [b] ⊆ [b 1 ],
[a] ◦ [b] ⊆ [a 1 ] ◦ [b 1 ].     (8)
Although interval addition and multiplication are associative, the distributive
law does not hold in general [1]. That is, we can easily find three intervals [a],
[b], and [c], for which
[a]([b] + [c]) ≠ [a][b] + [a][c].
However, for any three intervals [a], [b], and [c], the subdistributive law
[a]([b] + [c]) ⊆ [a][b] + [a][c]
does hold. Moreover, there are important cases in which the distributive law
does hold. For example, it holds if [b][c] ≥ 0, if [a] is a point interval, or if [b]
and [c] are symmetric. In particular, for α ∈ R, which can be interpreted as
the point interval [α, α], and intervals [b] and [c], we have
α([b] + [c]) = α[b] + α[c].
For an interval [a], we define the width and midpoint of [a] as
w([a]) = \overline{a} − \underline{a}     (9)
and
m([a]) = (\underline{a} + \overline{a})/2,     (10)
respectively [19]. Width and midpoint are defined component-wise for interval
vectors and matrices.
Using (4-6) and (9), one can easily show that
w([a] + [b]) = w([a]) + w([b])     (11)
and
w(α[a]) = |α| w([a])     (12)
for any [a], [b] ∈ IR and α ∈ R.
The equalities (11) and (12) also hold when [a] and [b] are interval vectors.
If A is an n × n real matrix and [a] is an n-dimensional interval vector, then,
from (11-12),
w(A[a]) = |A| w([a]),     (13)
where |A| is obtained by taking absolute values on each component of A.
2.1 Ranges of Functions
Let f : D → R be a function on D ⊆ R^n. The interval-arithmetic evaluation
of f on [a] ⊆ D, which we denote by f([a]), is obtained by replacing each
occurrence of a real variable with a corresponding interval, by replacing the
standard functions with enclosures of their ranges, and by performing interval-
arithmetic operations instead of the real operations. From (8), the range of f,
{ f(y) | y ∈ [a] }, is always contained in f([a]). Although f([a]) is not unique,
since expressions that are mathematically equivalent for scalars, such as x(y −
z) and xy − xz, may have different values if x, y, and z are intervals, a value
for f ([a]) can be determined from the code list, or computational graph, for
f . This implies that, if we rearrange an interval expression, we may obtain
tighter bounds.
If f is continuously differentiable on D ⊆ R^n and [a] ⊆ D, then,
for any y and b ∈ [a], f(y) = f(b) + f'(ξ)(y − b) for some ξ ∈ [a], by the
mean-value theorem. Thus,
f(y) ∈ f(b) + f'([a])([a] − b) =: f_M([a]; b)
[19]. The mean-value form, f M ([a]; b), is popular in interval methods since
it often gives tighter enclosures for the range of f than the straightforward
interval-arithmetic evaluation of f itself. Like f , f M is not uniquely dened,
but a value for it can be determined from the code lists for f and f 0 .
2.2 Automatic Generation of Taylor Coefficients
Since the interval methods considered here use Taylor series coefficients, we
outline the scheme for their generation. More details can be found in [3] or
[19], for example.
If we know the Taylor coefficients (u)_i = u^(i)/i! and (v)_i = v^(i)/i! for i = 0, ..., p
for two functions u and v, then we can compute the pth Taylor
coefficient of u ± v, uv, and u/v by standard calculus formulas [19].
We introduce the sequence of functions
f^[0](y) = y,
f^[i](y) = (1/i) (∂f^[i-1]/∂y) f(y),   i ≥ 1.     (15)
For the IVP y' = f(y), y(t_j) = y_j, the ith Taylor coefficient of y(t) at t_j satisfies
(y_j)_i = f^[i](y_j) = (1/i) ( f(y_j) )_(i-1),
where ( f(y_j) )_(i-1) is the (i-1)st Taylor coefficient of f evaluated at y_j. Using
this relation and formulas for the Taylor coefficients of sums, products, quotients, and
the standard functions, we can recursively evaluate (y_j)_i for i ≥ 1. It can be
shown that, to generate k Taylor coefficients, we need at most O(k^2) times as
much computational work as we require to evaluate f(y) [19]. Note that this is
far more efficient than the standard symbolic generation of Taylor coefficients.
If we have a procedure to compute the point Taylor coefficients of y(t) and
perform the computations in interval arithmetic with [y j ] instead of y j , we
obtain a procedure to compute the interval Taylor coefficients of y(t) at t j .
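As a small illustration of this recursion, the sketch below generates the point Taylor coefficients for the scalar IVP y' = y^2 (an assumed example, not one from the paper), using (y_j)_i = (f(y_j))_(i-1)/i together with the Cauchy-product rule for the coefficients of a product; running it with intervals instead of floats would give interval Taylor coefficients, as noted above.

def taylor_coeffs(yj, k):
    """Taylor coefficients y[i] = y^(i)(t_j)/i! for y' = y*y with y(t_j) = yj."""
    y = [yj]
    for i in range(1, k + 1):
        # (f(y))_{i-1} for f(y) = y*y: Cauchy product of the known coefficients
        f_im1 = sum(y[r] * y[i - 1 - r] for r in range(i))
        y.append(f_im1 / i)
    return y

# Example: y(0) = 1 gives y(t) = 1/(1 - t), whose coefficients are all 1.
print(taylor_coeffs(1.0, 5))    # [1.0, 1.0, 1.0, 1.0, 1.0]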
3 Overview of Interval Methods for IVPs for ODEs
3.1 Algorithm I: Validating Existence and Uniqueness of the Solution
Using the Picard-Lindelof operator and the Banach fixed-point theorem, one
can show that if h_(j-1) and [ỹ^0_(j-1)] satisfy
[y_(j-1)] + [0, h_(j-1)] f([ỹ^0_(j-1)]) ⊆ [ỹ^0_(j-1)],     (16)
then (1) with y(t_(j-1)) = y_(j-1) ∈ [y_(j-1)] has a unique solution y(t; t_(j-1), y_(j-1)) ∈ [ỹ^0_(j-1)]
for all t ∈ [t_(j-1), t_j] and all y_(j-1) ∈ [y_(j-1)].
We refer to a method based on (16) as a first-order enclosure method. Such a
method can be easily implemented, but a serious disadvantage of this approach
is that it often restricts the stepsize Algorithm II could take.
One can obtain methods that enable larger stepsizes by using polynomial
enclosures [18] or more Taylor series terms in the sum in (16), thus obtaining
a high-order Taylor series enclosure method [6, 24]. In the latter, we determine
h_(j-1) and an a priori enclosure [ỹ_(j-1)] such that a condition of the form (16), but built
from the first k terms of the Taylor series of the solution, holds for all t ∈ [t_(j-1), t_j] (17); then (1)
has a unique solution y(t; t_(j-1), y_(j-1)) ∈ [ỹ_(j-1)]
for all t ∈ [t_(j-1), t_j] and all y_(j-1) ∈ [y_(j-1)].
In [24], we show that, for many problems, an interval method based on (17)
is more efficient than one based on (16).
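A scalar sketch of a first-order enclosure test in the spirit of (16) follows, using the Interval class from Section 2. The inflation amounts, the retry limit, and the strategy of simply halving the stepsize are illustrative choices, not the algorithm of [24].

def validate_step(f, y, h, tries=10):
    """Find h and an a priori box satisfying the Picard condition (16), scalar case."""
    for _ in range(tries):
        ytilde = Interval(y.lo - 0.1 * y.width() - 0.01,
                          y.hi + 0.1 * y.width() + 0.01)   # epsilon-inflation of [y]
        picard = y + Interval(0.0, h) * f(ytilde)
        if ytilde.lo <= picard.lo and picard.hi <= ytilde.hi:
            return h, ytilde        # unique solution stays in ytilde on the step
        h *= 0.5                    # otherwise reduce the stepsize and retry
    raise RuntimeError("could not validate the step")

# Example for y' = -2y with y(0) in [0.9, 1.1]:
h, box = validate_step(lambda u: Interval(-2.0) * u, Interval(0.9, 1.1), 0.1)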
3.2 Algorithm II: Computing a Tight Enclosure of the Solution
Consider the Taylor series expansion
y_j = y_(j-1) + Σ_{i=1}^{k-1} h_(j-1)^i f^[i](y_(j-1)) + h_(j-1)^k f^[k](y(ξ; t_(j-1), y_(j-1))),     (18)
where the remainder term is understood component-wise, its lth component
(l = 1, ..., n) evaluated at some ξ_l ∈ [t_(j-1), t_j]. Using the a
priori bounds [ỹ_(j-1)] from Algorithm I, we can enclose the local truncation
error of the Taylor series (18) on [t_(j-1), t_j] by h_(j-1)^k f^[k]([ỹ_(j-1)]).
We can also bound the ith Taylor coefficient f^[i](y_(j-1)) by f^[i]([y_(j-1)]). Therefore,
[y_j] = [y_(j-1)] + Σ_{i=1}^{k-1} h_(j-1)^i f^[i]([y_(j-1)]) + h_(j-1)^k f^[k]([ỹ_(j-1)])     (19)
contains y(t_j; t_(j-1), [y_(j-1)]). However, as explained below, (19) illustrates how replacing
real numbers in an algorithm by intervals often leads to large overestimations.
Taking widths on both sides of (19) and using (11), we obtain
w([y_j]) ≥ w([y_(j-1)]),
which implies that the width of [y j ] almost always increases with j, even if
the true solution contracts.
A better approach is to apply the mean-value theorem to f [i] in (18), obtaining
for any ^
I
An equality is possible only in the trivial cases h
for all
where J
is the Jacobian of f [i] with its lth row evaluated at
This formula is
the basis for the ITS methods of Moore [19], Eijgenraam [8], Lohner [17], and
Rihm [27]; see also [23].
Let
and
(The ith Jacobian can be computed by generating the Taylor coefficient
code for f^[i] and then differentiating it [2, 3]. Alternatively, these Jacobians
can be computed by generating the Taylor coefficients for the associated variational
equation [17].)
Using (21), we can rewrite (20) as
We refer to a method implementing (22) as the direct method.
If we compute enclosures with (22), the widths of the computed intervals may
decrease. However, this approach frequently works poorly, because the interval
vector
significantly overestimate the set
Such overestimations accumulate as the integration proceeds. This is often
called the wrapping effect.
In Figure 1, we illustrate the wrapping of the parallelepiped { Ax | x ∈ [x] }
by the box A[x], for a particular nonsingular matrix A and interval vector [x].
Fig. 1. The wrapping of the parallelepiped { Ax | x ∈ [x] } by the box A[x].
3.2.1 The Wrapping Effect
The wrapping effect is clearly illustrated by Moore's example [19],
y_1' = y_2,   y_2' = −y_1,   y(0) ∈ [y_0].     (23)
The interval vector [y 0 ] can be viewed as a box in the (y 1 , y 2 ) plane. At t 1 > 0,
the true solution of (23) is the rotated box shown in Figure 2. If we want to
Fig. 2. The rotated box is wrapped at t 1 .
enclose this box in an interval vector, we have to wrap it by another box with
sides parallel to the y 1 and y 2 axes. On the next step, this larger box is rotated
and so must be enclosed in a still larger one. Thus, at each step, the enclosing
boxes become larger and larger, but the true solution set is a rotated box of
the same size as [y 0 ]. Moore showed that at t = 2π, the interval inclusion is
inflated by a factor of e^(2π) ≈ 535 as the stepsize approaches zero [19].
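The growth caused by wrapping can be reproduced in a few lines of Python. This is a sketch under the assumption that the flow over a step of size h is the rotation matrix R(h), which is exact for (23); only the box widths are propagated, using w(A[x]) = |A| w([x]).

import math

def wrapped_widths(w1, w2, h, steps):
    c, s = math.cos(h), abs(math.sin(h))
    for _ in range(steps):
        # widths of the axis-parallel box wrapping R(h) applied to the current box
        w1, w2 = c * w1 + s * w2, s * w1 + c * w2
    return w1, w2

print(wrapped_widths(1.0, 1.0, 2 * math.pi / 64, 64))
# roughly (300, 300) with 64 steps; the inflation factor tends to e^(2*pi) ~ 535
# as the stepsize shrinks, while the true solution set at t = 2*pi is still the
# original unit box.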
3.2.2 Lohner's Method
Here, we describe Lohner's QR-factorization method [16, 17], which is one of the most successful, general-purpose methods for reducing the wrapping effect.
Let A_0 = I and A_j ∈ R^{n×n} for j ≥ 1, where I is the identity matrix. From (20-21) and (24-25), we compute [y_j] and propagate for the next step the interval vector defined there, where A_j ∈ R^{n×n} is nonsingular for j ≥ 1. In Lohner's QR-factorization method, A_j is chosen as the orthogonal factor Q_j of a QR factorization, where Q_j is orthogonal and R_j is upper triangular. Other choices for A_j are discussed in [16, 17, 20, 22].
One explanation why this method is successful at reducing the wrapping effect is that we enclose the solution on each step in a moving orthogonal coordinate system that "matches" the solution set; for more details, see [16, 17, 20, 23].
In Section 6, we show that this choice for A_j ensures better stability of the QR-factorization method, compared to the direct method.
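The following sketch (our own, with illustrative choices of T and the initial widths) contrasts the two error-propagation schemes on the rotation of Moore's example: the direct method propagates widths with |T|, while the QR scheme propagates them with |R_j| obtained from the factorization T A_{j-1} = A_j R_j with orthogonal A_j.

```python
import numpy as np

def compare_direct_vs_qr(h, n_steps):
    T = np.array([[np.cos(h), np.sin(h)],
                  [-np.sin(h), np.cos(h)]])
    w_direct = np.array([1.0, 1.0])   # widths propagated by the direct method
    w_qr = np.array([1.0, 1.0])       # widths in the moving coordinate system
    A = np.eye(2)                     # A_0 = I
    for _ in range(n_steps):
        w_direct = np.abs(T) @ w_direct
        Q, R = np.linalg.qr(T @ A)    # T A_{j-1} = A_j R_j with A_j = Q
        A = Q
        w_qr = np.abs(R) @ w_qr
    return w_direct.max(), w_qr.max()

if __name__ == "__main__":
    h = 0.1
    d, q = compare_direct_vs_qr(h, n_steps=int(round(2 * np.pi / h)))
    print("direct method width growth:", d)   # grows by a large factor (wrapping)
    print("QR method width growth:   ", q)    # stays close to 1 in the rotated frame
```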
4 An Interval Hermite-Obreschkoff Method
For many years, Taylor series has been the only effective approach for computing rigorous bounds on the solution of an IVP for an ODE. Recently, we developed a new scheme, an interval Hermite-Obreschkoff (IHO) method [20, 21]. Here, we outline the method and its potential.
Let the coefficients c^{q,p}_i (q, p ≥ 0 and i ≥ 0) be defined as in (28). It can be shown that the Hermite-Obreschkoff relation (29) holds with the coefficients c^{p,q}_i and c^{q,p}_i [20, 31]. The Hermite-Obreschkoff formula (29) is the basis of our IHO method. As in an ITS method, we can easily bound the local truncation error in the IHO method, since we can readily generate the interval Taylor coefficients.
The method we propose in [20] consists of two phases, which can be considered as a predictor and a corrector. The predictor computes an enclosure [y^{(0)}_j] of the solution at t_j, and using this enclosure, the corrector computes a tighter enclosure [y_j] ⊆ [y^{(0)}_j] at t_j. If q > 0, (29) is an implicit scheme. The corrector applies a Newton-like step to tighten [y^{(0)}_j].
We have shown in [20, 21] that for the same order and stepsize, our IHO method has smaller local error, better stability, and requires fewer Jacobian evaluations than an ITS method. The extra cost of the Newton step is one matrix inversion and a few matrix multiplications.
5 Instability from the Formula for the Truncation Error
In this section, we investigate the stability of the ITS and IHO methods, when applied with a constant stepsize and order to the test problem
y' = λy,  y(0) = y_0,   (30)
where λ, y_0 ∈ R and λ < 0. Since we have not defined complex interval arithmetic, we do not consider problems with complex λ. Note also that the wrapping effect does not occur in one-dimensional problems.
For the remainder of this paper, we consider ITS methods with a constant stepsize and order k of the truncation error, and in this section, we consider the IHO scheme with a constant stepsize and the corresponding orders p and q.
5.1 The Interval Taylor Series Method
Suppose that at t_{j-1} > 0, we have computed a tight enclosure [y^{ITS}_{j-1}] of the solution of (30) with an ITS method, and [~y^{ITS}_{j-1}] is an a priori enclosure of the solution on [t_{j-1}, t_j]. Let T_{k-1} denote the Taylor polynomial of degree k-1 of e^z, as in (31). Using (31), an ITS method for computing tight enclosures of the solution to (30) can be written as
[y^{ITS}_j] = T_{k-1}(hλ)[y^{ITS}_{j-1}] + ((hλ)^k / k!) [~y^{ITS}_{j-1}],   (32)
cf. (20). Since [~y^{ITS}_{j-1}] ⊇ [y^{ITS}_{j-1}], we obtain from (31-32) that
w([y^{ITS}_j]) ≥ (|T_{k-1}(hλ)| + |hλ|^k / k!) w([y^{ITS}_{j-1}]).   (33)
(Note that w([~y^{ITS}_{j-1}]) ≥ w([y^{ITS}_{j-1}]).) Therefore, if |T_{k-1}(hλ)| + |hλ|^k / k! > 1, the ITS method given by (32) is asymptotically unstable, in the sense that lim_{j→∞} w([y^{ITS}_j]) = ∞. This result implies that we have restrictions on the stepsize not only from the function T_{k-1}(hλ), as in point methods for IVPs for ODEs, but also from the factor |hλ|^k / k! in the remainder term.
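A small sketch (our own) of the resulting stepsize restriction: the enclosure widths are amplified per step by at least |T_{k-1}(hλ)| + |hλ|^k/k!, so the largest stable stepsize is where this amplification factor crosses 1. The bisection tolerance and the sample values of λ and k are arbitrary illustrative choices.

```python
import math

def amplification(z, k):
    """Lower bound on the per-step width amplification of the ITS method
    for y' = lam*y with z = h*lam: |T_{k-1}(z)| + |z|^k / k!."""
    taylor = sum(z**i / math.factorial(i) for i in range(k))
    return abs(taylor) + abs(z)**k / math.factorial(k)

def max_stable_h(lam, k, h_max=100.0, tol=1e-10):
    """Largest h found by bisection with amplification(h*lam, k) <= 1."""
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if amplification(mid * lam, k) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    lam = -10.0
    for k in (3, 7, 11, 17):
        print(f"order k = {k:2d}: largest stable stepsize ~ {max_stable_h(lam, k):.4f}")
```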
5.2 The Interval Hermite-Obreschkoff Method
As in the previous subsection, we assume that [y^{IHO}_{j-1}] is a tight enclosure computed with an IHO method and contains y(t_{j-1}). The formula (29) reduces to (34), where the coefficients c^{p,q}_i are defined in (28). Define R_{p,q} and Q_{p,q} as in (35), with the coefficients c^{q,p}_i. Also let [~y^{IHO}_{j-1}] be an a priori enclosure of the solution on [t_{j-1}, t_j]. From (34-35), we compute an enclosure [y^{IHO}_j] from [y^{IHO}_{j-1}] and [~y^{IHO}_{j-1}] by (36), where the widths satisfy
w([y^{IHO}_j]) ≥ c · w([y^{IHO}_{j-1}])
with an amplification factor c built from |R_{p,q}(hλ)| and the remainder term. Therefore, the IHO method is asymptotically unstable in the sense that lim_{j→∞} w([y^{IHO}_j]) = ∞ whenever this amplification factor exceeds one.
In (33) and (36), T_{k-1}(z) and R_{p,q}(z) are approximations to e^z of the same order. In particular, R_{p,q}(z) is the Padé rational approximation to e^z (see for example [26]). For the ITS method, |T_{k-1}(hλ)| < 1 only if hλ is in the finite stability region of T_{k-1}(z). However, for the IHO method with 0 > λ ∈ R, |R_{p,q}(hλ)| < 1 for any h > 0 for the choices of p and q used in our method. Roughly speaking, the stepsize in the ITS method is restricted by both terms in (33), while in the IHO method, the stepsize is limited mainly by the remainder term. In the latter case, the corresponding coefficient divided by Q_{p,q}(hλ) is usually much smaller than one; thus, the stepsize limit for the IHO method is usually much larger than for the ITS method.
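The contrast between the two stability functions can be tabulated numerically. The sketch below (our own) uses the standard closed form of the (p, q) Padé approximant of e^z with numerator degree p and denominator degree q; whether this indexing matches the paper's R_{p,q} exactly is an assumption on our part.

```python
import math

def pade_exp(p, q, z):
    """(p, q) Pade approximant of exp(z): numerator degree p, denominator degree q."""
    num = sum(math.factorial(p + q - j) * math.factorial(p)
              / (math.factorial(p + q) * math.factorial(j) * math.factorial(p - j))
              * z**j for j in range(p + 1))
    den = sum(math.factorial(p + q - j) * math.factorial(q)
              / (math.factorial(p + q) * math.factorial(j) * math.factorial(q - j))
              * (-z)**j for j in range(q + 1))
    return num / den

def taylor_exp(k, z):
    """Taylor polynomial of exp(z) with k terms, i.e. T_{k-1}(z)."""
    return sum(z**i / math.factorial(i) for i in range(k))

if __name__ == "__main__":
    p = q = 4
    k = p + q + 1   # comparable order
    for z in (-1.0, -5.0, -20.0, -100.0):
        print(f"z = {z:7.1f}:  |T_{k-1}(z)| = {abs(taylor_exp(k, z)):10.3e}"
              f"   |Pade({p},{q})(z)| = {abs(pade_exp(p, q, z)):10.3e}")
```

For negative z of large magnitude the Taylor polynomial blows up, while the rational approximant stays bounded, which is the qualitative point made above.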
An important point to note here is that an interval version of a standard numerical method, such as the Hermite-Obreschkoff formula (29), that is suitable for stiff problems may still have a restriction on the stepsize. To obtain an interval method without a stepsize restriction, we must find a stable formula not only for the propagated error, but also for the associated truncation error.
6 A New Perspective on the Wrapping Effect
The problem of reducing the wrapping effect has usually been studied from a geometric perspective as finding an enclosing set that introduces as little overestimation of the enclosed set as possible. For example, parallelepipeds [8, 13, 17, 19], ellipsoids [11, 12, 25], convex polygons [29], and zonotopes [14] have been employed to reduce the wrapping effect.
In [22], we linked the wrapping effect to the stability of an ITS method for IVPs for ODEs and interpreted the problem of reducing the wrapping effect as one of finding a more stable scheme for advancing the solution. This allowed us to study the stability of several ITS methods (and thereby the wrapping effect) by employing eigenvalue techniques, which have proven so useful in studying the stability of point methods.
In this section, we explain first how the wrapping effect can cause instability in the direct method and then show that Lohner's QR-factorization method provides a more stable scheme for propagating the error.
6.1 The Wrapping Effect as a Source of Instability
Consider the linear IVP (37) with a constant coefficient matrix and dimension n ≥ 2. When applied with a constant stepsize h and order k to (37), the direct method (22) reduces to the recursion (38), in which a point matrix T propagates the enclosure from one step to the next. Taking widths on both sides of (38) and using (11) and (13), we obtain
w([y_j]) = |T| w([y_{j-1}]) + w([z_j]).
We can interpret w([y_j]) as the size of the bound on the global error and w([z_j]) as the size of the bound on the local error. We can keep w([z_j]) small by decreasing the stepsize or increasing the order. However, T[y_{j-1}] can be a large overestimation of the set { Ty | y ∈ [y_{j-1}] }. The reason for this is that { Ty | y ∈ [y_{j-1}] } is not generally a box, but it is enclosed, or wrapped, by the box T[y_{j-1}]; cf. Figure 1. Note that we wish to compute the set { Ty | y ∈ [y_{j-1}] }, but this is generally infeasible. So we compute T[y_{j-1}] instead. That is, we normally compute on each step products of the form T[y_{j-1}]; consequently, we may incur a wrapping on each step, resulting in unacceptably wide bounds for the solution of (37).
Consider now the point Taylor series (PTS) method given by the corresponding point recursion (39). If we denote by δ_j the global error of this method at t_j, then δ_j = T δ_{j-1} + z_j, where z_j is the local truncation error of this method.
An important observation is that the global error in the PTS method propagates with T, while the global error in the direct method propagates with |T|. Denote by ρ(A) the spectral radius of A ∈ R^{n×n}. It is well-known that ρ(T) ≤ ρ(|T|). Assuming that T and |T| can be diagonalized, we showed in [22] that
if ρ(|T|) is not much larger than ρ(T), then the bounds on the global error in these two methods are almost the same, and the wrapping effect is not a serious difficulty for the direct method; and
if ρ(|T|) is significantly larger than ρ(T), then the global error in the direct method can be much larger than the global error in the PTS method, and the direct method may suffer significantly from the wrapping effect.
Thus, we can associate the wrapping effect in an ITS method with its global error being significantly larger than the global error of the corresponding PTS method. In particular, if ρ(T) < 1 and ρ(|T|) > 1, the PTS method is stable, but the associated ITS method is likely asymptotically unstable in the sense that lim_{j→∞} ||w([y_j])|| = ∞. To reduce the wrapping effect, or improve the stability of an ITS method, we must find a more stable scheme for advancing the solution.
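A quick numerical illustration of the gap between ρ(T) and ρ(|T|) (our own example): for a damped rotation T = 0.9 R(θ), the point iteration contracts, while the width iteration driven by |T| diverges.

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

theta = np.pi / 4
T = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

print("rho(T)   =", spectral_radius(T))          # 0.9  -> PTS-like iteration is stable
print("rho(|T|) =", spectral_radius(np.abs(T)))  # ~1.27 -> width bounds of the direct method blow up
```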
6.2 How Lohner's QR-factorization Method Improves Stability
When applied with a constant stepsize and order to (37), an ITS method incorporating Lohner's QR-factorization scheme, which together we refer to as Lohner's method, can be written in the form (40-41). The transformation matrices A_j ∈ R^{n×n} satisfy the QR relations (42-43), where A_j is orthogonal and R_j is upper triangular. Using (42-43), we write (40-41) in the equivalent form (44-45).
The interval vector [r_j] in (45) can be interpreted as an enclosure of the global error (at t_j) that is propagated to the next step. Obviously, we are interested in keeping the overestimation in [r_j] as small as possible. We want w([r_j]) to be not much bigger than w([z_j]), which we keep sufficiently small (by reducing the stepsize or increasing the order). Since the widths of [r_j] are propagated with |R_j|, we can consider |R_j| as the matrix for propagating the global error in the QR method. As we discuss below, the nature of the R_j matrices ensures that this method is normally more stable than the direct method.
A key observation here is that (42-43) is the simultaneous iteration (see for example [32]) for computing the eigenvalues of T. This iteration is closely related to Francis' QR algorithm [9] for finding the eigenvalues of T. To see this, let S_j be defined by (47) for j ≥ 1. Note that S_j is orthogonally similar to T. Now observe that the iteration (48-49) is just the unshifted QR algorithm for finding the eigenvalues of T.
6.2.1 Eigenvalues of Distinct Magnitudes
Let T be nonsingular with eigenvalues of distinct magnitudes. Then, from Theorem 3 in [9], under the QR iteration (48-49), the elements below the principal diagonal of S_j tend to zero, the moduli of those above the diagonal tend to fixed values, and the elements on the principal diagonal tend to the eigenvalues of T.
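The following short sketch (our own) runs the unshifted QR iteration on a matrix with eigenvalues of distinct magnitudes and checks the behaviour described in the theorem: the strictly lower triangular part of S_j decays and the diagonal approaches the eigenvalues of T.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
eigs = np.array([3.0, -1.5, 0.7, 0.2])        # distinct magnitudes
X = rng.standard_normal((n, n))
T = X @ np.diag(eigs) @ np.linalg.inv(X)      # T with prescribed eigenvalues

S = T.copy()
for _ in range(200):
    Q, R = np.linalg.qr(S)                    # unshifted QR step: S_{j+1} = R Q
    S = R @ Q

print("max |subdiagonal| of S_j:", np.max(np.abs(np.tril(S, -1))))
print("diagonal of S_j:         ", np.sort(np.diag(S))[::-1])
print("eigenvalues of T:        ", np.sort(eigs)[::-1])
```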
Using this theorem, we showed in [22] that lim_{j→∞} |R_j| exists and equals the modulus of an upper triangular matrix whose diagonal carries the moduli of the eigenvalues of T. Thus,
lim_{j→∞} ρ(|R_j|) = ρ(T).   (50)
Using (50), we derived in [22] an upper bound for the global error of Lohner's method and showed that this bound is not much bigger than the bound for the global error of the corresponding PTS method (39), for the same stepsize and order. We also showed on several examples that, at each t_j, the global error of Lohner's method is not much bigger than the global error of the PTS method.
6.2.2 Eigenvalues of Equal Magnitude
If T is nonsingular with p eigenvalues of equal magnitude, then the matrices S_j, defined in (47), tend to a block upper-triangular form with a p×p block on its main diagonal [10]. The eigenvalues of this block tend to these p eigenvalues.
If T is nonsingular with at least one complex conjugate pair of eigenvalues and at most two eigenvalues of the same magnitude, then S_j tends to a block upper-triangular form with 1×1 and 2×2 blocks on its diagonal. The eigenvalues of these blocks tend to the eigenvalues of T. Note that S_j does not converge to a fixed matrix, but the eigenvalues of its diagonal blocks converge to the eigenvalues of T.
For this case, we showed in [22] that, as j → ∞,
if T has a dominant complex conjugate pair of eigenvalues, then ρ(R_j) oscillates; and
if T has a unique real eigenvalue of maximal modulus, then we should generally expect that (50) again holds.
Since the matrices |R_j| do not approach a fixed matrix, as in the case of eigenvalues of distinct magnitudes, deriving a bound for the global error in this case is more difficult. However, we illustrated with four examples in [22] that the global error in Lohner's method is not normally much bigger than the corresponding global error of the PTS method.
--R
Introduction to Interval Computations.
FADBAD, a flexible C
TADIFF, a flexible C
Validating an a priori enclosure using high-order Taylor series
The Solution of Initial Value Problems Using Interval Arithmetic.
The QR transformation: A unitary analogue to the LR transformation
The Algebraic Eigenvalue Problem.
Automatic error analysis for the solution of ordinary di
A computable error bound for systems of ordinary di
Computational Methods in Ordinary Di
Enclosing the solutions of ordinary initial and boundary value problems.
Step size and order control in the veri
Interval Analysis.
Computing Rigorous Bounds on the Solution of an Initial Value Problem for an Ordinary Di
An interval Hermite-Obreschko method for computing rigorous bounds on the solution of an initial value problem for an ordinary dierential equation
A new perspective on the wrapping effect
Validated solutions of initial value problems for ordinary di
Global, rigorous and realistic bounds for the solution of dissipative di
A First Course in Numerical Analysis.
On a class of enclosure methods for initial value problems.
On the existence and the veri
A heuristic to reduce the wrapping effect
On higher order stable implicit methods for solving parabolic di
On the integration of stiff
Understanding the QR algorithm.
--TR
The algebraic eigenvalue problem
Rigorously computed orbits of dynamical systems without the wrapping effect
Computing rigorous bounds on the solution of an initial value problem for an ordinary differential equation
--CTR
Hidde De Jong, Qualitative simulation and related approaches for the analysis of dynamic systems, The Knowledge Engineering Review, v.19 n.2, p.93-132, June 2004 | wrapping effect;interval methods;validated methods;QR algorithm;ordinary differential equations;simultaneous iteration;taylor series;initial value problems;stability |
635471 | Differential algebraic systems anew. | It is proposed to figure out the leading term in differential algebraic systems more precisely. Low index linear systems with those properly stated leading terms are considered in detail. In particular, it is asked whether a numerical integration method applied to the original system reaches the inherent regular ODE without conservation, i.e., whether the discretization and the decoupling commute in some sense. In general one cannot expect this commutativity so that additional difficulties like strong stepsize restrictions may arise. Moreover, abstract differential algebraic equations in infinite-dimensional Hilbert spaces are introduced, and the index notion is generalized to those equations. In particular, partial differential algebraic equations are considered in this abstract formulation. | Introduction
When dealing with standard linear time-varying coefficient DAEs (1.1) and their adjoint equations (1.2), one is confronted with a kind of unsightly dissymmetry. These equations are of different type. In [3], more precise formulations such as
E(P_E x)'(t) + ... ,   (1.3)
Humboldt University, Institute of Mathematics, D-10099 Berlin, Germany
([email protected])
are used, where P_E denotes a projector function along ker E. These formulations show a little bit more symmetry. However, (1.4) is treated in [3] via an auxiliary enlarged system that has a corresponding projector inside the derivative like (1.3). This formal incompleteness makes us treat general equations of the form ([2])
A(t)(D(t)x(t))'(t) + B(t)x(t) = q(t),   (1.5)
where A, D and B are continuous matrix functions, and A and D are well matched so that D precisely figures out which derivatives are actually involved. Neither D nor A have to be projectors. The adjoint equation (1.6) now fits nicely into this form.
There are remarkable benefits of this new formulation: The inherent regular differential equation of a DAE (1.5) is now uniquely determined (cf. [2] and Section 2). Nice generalizations of fundamental classical results (like the relation between the fundamental matrices of (1.5) and (1.6)) are obtained, but also further symmetries concerning the characteristic subspaces and the index of (1.5) and (1.6), respectively ([2]). Further, the boundary value problem resulting from linear-quadratic control problems looks now nice and transparent (cf. Section 5).
The nonlinear counterparts of (1.5) and (1.6) should be very welcome in applications, e. g. in circuit simulation. In [13], [14], those nonlinear equations are studied as well as numerical integration methods. To be more transparent, we do not touch at all the nonlinear cases in the present paper.
On the one hand, the new approach (1.5) allows to maintain all theoretical and numerical results concerning (1.1) via (1.3), but also those via the alternative formulation involving R_E, where R_E is a projector function along im E. On the other hand, a generalization to abstract differential algebraic equations in infinite dimensional spaces becomes possible (cf. Sections 4, 5). In particular, this seems to be a useful tool for treating so-called partial differential algebraic equations.
A really surprising benefit of the new approach concerns numerical integration methods. At this point it should be mentioned that, e. g. in the early paper [18], the formulation (1.5) was used with the seemingly technical condition on im D(t) to be time-invariant. Now we understand that this condition is exactly the one allowing the DAE-decoupling into its essential parts and the discretization of the DAE to commute in the index-1 case (cf. [13]). In the index-2 case, if im D(t) is decomposed into certain two further constant subspaces, the decoupling and the discretization also commute in some sense. Hence, neither phenomena converting e. g. the implicit Euler method within the inherent regular ODE into the explicit one ([11], [10]) nor additional stepsize restrictions may arise. Since this is primarily a property of the DAE itself, we call those equations numerically well formulated (cf. Section 3, [13], [14]). Obviously, now one should try to have numerically well formulated equations at the very beginning. If this is not the case, refactorizations of the leading term (or factorizations of E) leading to numerically well formulated versions might be useful (cf. Examples 3.1, 3.2, 3.3 below). In particular, the Euler backward method applied to the famous index-2 example given in [20] is well-known to get into great difficulties and to fail. However, in a slight modification (Example 3.1) it works best, and we understand why this happens.
By the way, the results on the contractivity of standard methods applied to such equations given (e. g. in [8], [10]) for the case of a constant nullspace ker E, and those of modified methods (e. g. [12]) for the case of a constant im E, are now completely clear.
This paper is organized as follows.
In Section 2 the basic decoupling is described in the necessary details. With this, we follow the lines of [2], but now the index-1 and index-2 cases are put together such that index-1 appears to be a special case. Furthermore, the matrix functions A and D may be rectangular ones now, i. e., not necessarily quadratic. In Section 3 we deal with the BDF as a prototype of a discretization (the same can be done, with some more effort, with Runge-Kutta methods, cf. [13], [14]), and show the advantages of numerically well formulated DAEs. A first attempt to formulate and to apply abstract differential algebraic equations is presented in Section 4. A linear-quadratic control problem for a DAE (in abstract formulation) is proved to be solvable in Section 5. Thereby, a Hamiltonian property turns out to be a further benefit of the numerically well formulated case.
2 Linear DAEs with properly stated leading terms
Consider equations
A(t)(D(t)x(t))'(t) + B(t)x(t) = q(t),  t ∈ I,   (2.1)
with continuous matrix coefficients A(t), D(t), B(t). The coefficients A and D that determine the leading term of equation (2.1) are assumed to be well matched in the following sense (cf. [2], Condition C1).
Definition 2.1 The ordered pair of continuous matrix functions A and D is said to be well matched if
ker A(t) ⊕ im D(t) = R^n,  t ∈ I,   (2.2)
and these subspaces are spanned by continuously differentiable bases.
The leading term is properly stated if A and D are well matched.
Obviously, if A and D are well matched, there is a uniquely determined projector function R(t) ∈ L(R^n) with im R(t) = im D(t) and ker R(t) = ker A(t).
Remark 2.1 The decomposition (2.2) implies that, for all t ∈ I, ker A(t) and im D(t) intersect trivially, and vice versa. Roughly speaking, D plays the role of an incidence matrix. It figures out the really involved derivatives of the unknown function.
Remark 2.2 Starting with a standard form equation
E(t)x'(t) + F(t)x(t) = q(t),
we may use any factorization E = AD with well matched A and D to obtain an equation of the form (2.1). In particular, the factorizations E = E P_E and E = R_E E can be used, where P_E(t) is a projector along the nullspace of E(t) and R_E(t) denotes a projector onto the image of E(t).
Naturally, a solution of equation (2.1) should be a function x ∈ C(I, R^m) that has a continuously differentiable product Dx and satisfies the equation at all t ∈ I. Let
C^1_D(I, R^m) := { x ∈ C(I, R^m) : Dx ∈ C^1(I, R^n) }
denote the respective function space.
Next we introduce certain subspaces and matrices to be used throughout this paper, namely the chain (2.6) of matrix functions G_j(t) and B_j(t) together with the subspaces N_j(t) = ker G_j(t) and S_j(t). Further, D(t)^- denotes the reflexive generalized inverse of D(t) with D D^- D = D, D^- D D^- = D^-, D(t)D(t)^- = R(t) and D(t)^-D(t) = P_0(t). Note that D(t)^- is uniquely determined if P_0(t) is given and vice versa.
Definition 2.2 (cf. [2]): The DAE (2.1) with properly formulated leading term and nontrivial N_0(t) has index μ ∈ {1, 2} if N_j(t) ∩ S_j(t) has constant dimension on I for each j, this dimension is positive for j < μ - 1, and N_{μ-1}(t) ∩ S_{μ-1}(t) = {0} on I.
Remark 2.3 In terms of the matrices G_j(t), the index-μ case is characterized by the ranks r_j := rank G_j(t): the matrices G_j(t) are singular for j < μ, while G_μ(t) is nonsingular. This can be verified by means of an auxiliary matrix and the reflexive generalized inverse G_j(t)^-.
Remark 2.4 In an analogous way, considering a somehow more complex construction of the matrices B_j, we could continue the chain (2.6) for i > 1 and define also an index higher than two. However, we dispense with that here in favour of more transparency, and since in the sections below we fully focus on the lower index case.
Turn, for a moment, to the well understood case of Hessenberg size-two DAEs (2.9). Letting A and D be the obvious block matrices built from identity blocks, we may rewrite (2.9) as a DAE (2.1) with properly stated leading term, and the chain matrices can then be computed explicitly. Furthermore, H(t) denotes the projection onto im B_12(t) along ker B_21(t). In our context, suitable projectors realize the corresponding decomposition (we have ker A = {0} here). The second equation in (2.9) gives simply an algebraic relation, while the first one leads to the dynamic part.
In order to extract a regular explicit ODE with respect to the unknown component, one has to differentiate I - H and to replace (I - H)x' accordingly. Then, as an inherent regular ODE, the corresponding equation results. The solutions of (2.9) are given by an explicit representation formula. Note that there is no need to differentiate the second equation of (2.9) or any part of the coefficients (except for the projection H and the term formed from the right-hand side q). Thus we obtain solvability for all sufficiently smooth right-hand sides, supposed H belongs to the class C^1.
In the following, we will derive an analogous result for general equations (2.1), where H and I - H correspond to DQ_1D^- and DP_1D^-.
Lemma 2.1 Given an index-2 DAE (2.1). Let Q_1(t) denote the projector onto N_1(t) along S_1(t). Then, for all t ∈ I, the decomposition
im D(t) = D(t)S_1(t) ⊕ D(t)N_1(t)
holds true, and DP_1D^- and DQ_1D^- are the uniquely determined projectors that realize this decomposition. If, additionally, D(t)S_1(t) and D(t)N_1(t) are spanned by functions that are continuously differentiable on I, then DP_1D^- and DQ_1D^- are continuously differentiable.
Proof. This assertion can be verified in the same way as Lemma 2.5 and Lemma 2.6 in [2] were proved for the case treated there.
In contrast to the index-1 case, im D is once more smoothly decomposed into the further subspaces DS_1 and DN_1 for index-2 DAEs. Let us mention that, if we think of index-3, we have to decompose DS_1 again, and so on.
Formally, we put the index-1 and index-2 cases together for brevity, as we trivially have N_1(t) = {0} and Q_1(t) = 0 for index-1 DAEs. Hence, G_2(t) is nonsingular on I for index-1 and index-2 DAEs. In the following, Q_1(t) always denotes the projector onto N_1(t) along S_1(t). Recall from [8], Appendix A, the representation (2.17). Relation (2.17) leads to the following properties.
Further, due to the chain construction, we compute the identities needed below. Consequently, scaling (2.1) by G_2^{-1} leads to an equivalent equation. Multiplying by suitable projectors, we decouple this equation into the following three ones, (2.20)-(2.22). Now, given any solution x ∈ C^1_D(I, R^m), we find by this the solution representation (2.23), where u solves the inherent regular ODE (2.24), and the factor K := I - Q_0 ... appearing in (2.23) is a nonsingular matrix function.
Clearly, this generalizes the well-known representation described above for Hessenberg size-two forms (2.9). In the index-1 case, (2.23), (2.24) simplify accordingly. Of course, the regular explicit ODE (2.24) can be formed without assuming the existence of a solution of the DAE, just by giving the coefficients A, D, B and the right-hand side q.
Definition 2.3 Given a DAE (2.1) of index μ ∈ {1, 2}. The explicit ODE (2.24) is said to be the inherent regular ODE of the DAE.
Lemma 2.2 Given an index-μ DAE (2.1), μ ∈ {1, 2}. Let DP_1D^- be continuously differentiable.
(i) Then, the subspaces D(t)S_1(t) and D(t)N_1(t) as well as the inherent regular ODE are uniquely determined by the problem data.
(ii) D(t)S_1(t) is a time-varying invariant subspace of the inherent regular ODE, i. e., if a solution belongs to this subspace at a certain point, it runs within this subspace all the time.
(iii) If D(t)S_1(t) and D(t)N_1(t) do not vary with time t, then solving the IVP for (2.24) with an initial condition in DS_1 yields the same solution as solving this IVP for the simplified ODE obtained by dropping the term containing (DP_1D^-)'.
Proof.
(i) (cf. [2]) When constructing the ODE (2.24) the only arbitrariness is the
choice of the projector P 0 and the corresponding D . So we start with
two dierent P 0 and ~
and the corresponding D and ~
D . Recall
that then DD
~
~
~
~
~
~
~
~
Consequently, since the projectors DP 1 D ; DQ 1 D do not depend on
the choice of P 0 , their images do not so, either.
Further, due to
~
~
~
~
I; we have
~
~
G 1= D ~
~
~
~
~
~
(ii) Given any solution ~
some
into (2.24) and multiply the resulting identity by
hence, for ~
vanishes identically as the solution of a homogeneous regular initial
value problem.
(iii) (cf. [13]) If D(t)S 1 (t) and D(t)N 1 (t) are constant, we may use constant
projectors subspaces. Then it holds that
In the consequence the term (DP 1 D
disappears.
In comparison with different notions used in the standard DAE theory (underlying ODE, essentially underlying ODE, inherent regular ODE), where certain transforms resp. projectors are not completely determined by the problem data, the decoupling described above is fully given by the problem. This is a real advantage. If the nullspace of A(t) is trivial, i. e., im D(t) has dimension n as it is the case for the Hessenberg size-two DAE (2.9), we can also speak of an inherent state space system (in minimal coordinates), supposed D(t)S_1(t) does not vary with t. Then, the inherent regular ODE (2.24) is essentially nothing else but an explicit ODE located in the constant state space DS_1. However, in the index-2 case D(t)S_1(t) and D(t)N_1(t) (cf. H(t) in (2.9)) vary with time in general.
Theorem 2.3 Given a DAE (2.1) with index μ, μ ∈ {1, 2}.
(i) For any sufficiently smooth q, equation (2.1) is solvable on C^1_D(I, R^m).
(ii) The initial value problem for (2.1) with the initial condition imposed on D(t_0)x(t_0) is uniquely solvable on C^1_D(I, R^m).
(iii) Supposed I is compact, the inequality (2.27) becomes true, i. e., the DAE has perturbation index μ.
(iv) The subspace S_ind(t) := K(t)D(t)^- D(t)S_1(t) is the geometric solution space of the homogeneous equation (2.1) with q = 0, i. e., S_ind(t) is filled by solutions and all solutions run within this subspace S_ind(t).
Proof. (cf. [2])
(i) is a simple consequence of (ii).
(ii) Solve the regular ODE (2.24) with the corresponding initial condition. Using the solution u, we construct the function x according to formula (2.23). Obviously, the term involving G_2^{-1}q is continuously differentiable, thus x ∈ C^1_D(I, R^m), and we check that x satisfies (2.1) indeed. The homogeneous IVP has only the trivial solution.
(iii) and (iv) are due to the construction.
Remark 2.5 One could relax the constant rank conditions in the Definitions 2.1 and 2.2 to characterize a class of DAEs (2.1) that have index μ ≤ 2 but undergo certain nondangerous index changes. If the interval I consists of a number of subintervals where all constant rank conditions are satisfied, we realize the decoupling on all these subintervals simultaneously and ask then for a continuously differentiable extension of DP_1D^- on the whole interval. If it exists, we may take advantage of (2.23), (2.24) again. However, we have to consider that some terms may have discontinuities now.
Example 2.1 The system x 0
has the solution x
independently whether the coe-cient function 2 C(I; R)
vanishes. If we put this system in the form (2.1), we choose
further, if (t) does not vanish,
otherwise
Then, on subintervals with (t) 6= 0, we have
, i. e., the index is two there, further
and points where (t) is zero we have G I. Trivially the
problem has index 1 on those intervals. Then G
In both cases we have DP 1
Remark 2.6 In [2] we aimed at a theory to treat both an original DAE and its adjoint equation in the same way. The adjoint equation of A(Dx)' + Bx = q is now
-D(t)^*(A(t)^*y(t))'(t) + B(t)^*y(t) = p(t).
Both equations have properly stated leading terms at the same time. Moreover, they have index μ ∈ {1, 2} simultaneously. Nice relations between the fundamental solution matrices are shown in [2].
Remark 2.7 The index notion given by Definition 2.2 resp. Remark 2.3 is approved for creating a practical index monitor ([6]).
3 Numerical integration methods and numerically well formulated DAEs
Given a DAE (2.1) with a properly formulated leading term and having index μ ∈ {1, 2}. The natural modification of the k-step BDF applied to (2.1) is
A_n (1/h) Σ_{j=0}^{k} α_j D_{n-j} x_{n-j} + B_n x_n = q_n,
to provide an approximation x_n to x(t_n) at the current step t_n. Let us use shorter denotations A_n := A(t_n) and so on for all given coefficient functions and resulting projectors. Denote the numerical derivative by [Dx]'_n := (1/h) Σ_{j=0}^{k} α_j D_{n-j} x_{n-j}.
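Before continuing with the decoupled form of this discretisation, here is a compact, self-contained sketch of the one-step case (backward Euler, k = 1, α_0 = 1, α_1 = -1) in Python. The concrete DAE used here is a hypothetical index-1 example of our own, chosen only to show how the properly stated leading term enters the scheme; it is not an example from the paper.

```python
import numpy as np

# Hypothetical index-1 DAE in properly stated form  A (D x)' + B x = 0:
#   x1' + x1 = 0,   x1 - x2 = 0     (exact solution: x1 = x2 = exp(-t) * x1(0))
A = np.array([[1.0], [0.0]])        # 2 x 1
D = np.array([[1.0, 0.0]])          # 1 x 2
B = np.array([[1.0,  0.0],
              [1.0, -1.0]])

def backward_euler_properly_stated(x0, h, n_steps):
    """Backward Euler for A (D x)' + B x = 0 with constant A, D, B:
       A (D x_n - D x_{n-1}) / h + B x_n = 0."""
    x = x0.copy()
    M = A @ D / h + B               # system matrix for the new iterate
    for _ in range(n_steps):
        rhs = A @ (D @ x) / h
        x = np.linalg.solve(M, rhs)
    return x

if __name__ == "__main__":
    x = backward_euler_properly_stated(np.array([1.0, 1.0]), h=0.1, n_steps=10)
    print("numerical x(1):", x, "  exact:", np.exp(-1.0))
```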
With this numerical derivative, we proceed in the same way as we did, in Section 2, with the DAE (2.1) itself, i. e. we scale by G_{2n}^{-1} and then use the decoupling projectors. The resulting system, analogous to system (2.20)-(2.22), is obtained, with
Denoting
we derive
nh
(D
A nj D n j x
where A nj :=
further
sjh)ds , so that
Insert (3.7) into (3.4) to obtain the recursion formula for
A nj
Then, by (3.5), (3.6), (3.8), we compute the representation of x n in the form
with
~
On the other hand, if we decoupled the DAE (2.1) first, applied the same BDF to the inherent regular ODE (2.24) and then used formula (2.23) to compute x_n, we would obtain (3.11), (3.12).
Comparing (3.11), (3.12) and (3.9), (3.10), we observe that, except for the terms indicated above, these formulas coincide, supposed the subspaces D(t)S_1(t), D(t)N_1(t) do not vary with t (cf. Lemma 2.2 above). Namely, under this assumption the respective coefficients in (3.11), (3.12) and in (3.9), (3.10) agree, but in (3.9), (3.10) we use u_n and, analogously, A_{nj}.
Definition 3.1 We will say that a discretization method and the DAE-decoupling commute if, for the approximation x_n obtained by applying the method to the original DAE, and the further approximation x̂_n obtained by applying the method to the inherent regular ODE and then using formula (2.23), it holds that x_n = x̂_n, where [DQ_1G_2^{-1}q]'_n denotes a numerical approximation of (DQ_1G_2^{-1}q)'(t_n) used in (2.23).
Let us stress that in the index-1 case, commutativity means simply x_n = x̂_n because of Q_1 = 0.
Theorem 3.1 Given an index-μ DAE (2.1), μ ∈ {1, 2}, which has constant characteristic subspaces DS_1 and DN_1. Then the BDF-discretization and the decoupling commute.
Remark 3.1 Recall once more that one has Q_1 = 0 for the index-1 case. Trivially, DS_1 and DN_1 are then constant iff im D is constant. In the index-2 case, im D(t) being constant is a necessary condition for DS_1 and DN_1 to be constant.
Remark 3.2 Here, we have chosen the BDF only for brevity. Of course, similar results can be proved for Runge-Kutta methods. Also, nonlinear DAEs may be considered ([14], [15]).
Remark 3.3 In particular, Theorem 3.1 generalizes the respective results obtained for DAEs with constant leading nullspace (cf. (1.3) with a constant projector), e. g. in [8] for index 1 and in [10] for index 2. On the other hand, Theorem 3.1 covers the situation studied in [12], i. e., an equation (1.1) rewritten with im E time-invariant.
Remark 3.4 It is a remarkable benefit of commutativity that we can now prove nice assertions on contractive or dissipative flows and the respective discretized versions ([14]). Having modified the respective notions (contractivity and dissipativity inequalities, absorbing sets etc.) in such a way that the standard ODE notions apply to the inherent regular ODE on its invariant subspace DS_1, we may use the well-known techniques and results approved in the regular ODE case.
In particular ([14]), a stiffly accurate and algebraically stable Runge-Kutta method applied to a contractive index-1 DAE yields a corresponding contractivity estimate for each two sequences of approximations starting with consistent initial values. Thereby, the constant appearing in the estimate is a bound of the canonical version of D corresponding to the (canonical) projector onto S_0 along N_0.
It is also shown in [14] that the Euler backward method reflects the dissipativity behaviour properly without any stepsize restriction.
Example 3.1 Rewrite the equation (cf. [20])
E(t)x'(t) + F(t)x(t) = q(t),   (3.14)
with the coefficients given there, as (3.15) or as (3.16). Both reformulated versions (3.15) and (3.16) have properly formulated leading terms.
Now, we observe that we have index-2 DAEs with constant subspaces as supposed in Theorem 3.1. Thus, for both versions (3.15) and (3.16), the BDF and the decoupling commute.
In both cases, the Euler backward method yields a simple, stable recursion. Recall that the Euler backward method directly applied to (3.14) gets into great difficulties and may fail for certain parameter values (cf. [4], [20]).
Example 3.2 The standard form DAE E(t)x'(t) + F(t)x(t) = q(t) with the coefficients given there has index 1 on I = R for arbitrary admissible parameter values. The solution is known explicitly. For suitable parameter values, one would expect the backward Euler method to generate a sequence {x_{n,2}}_n such that x_{n,2} → 0 (n → ∞) without any stepsize restriction. However, writing out the recursion in detail shows that, while everything works fine in the constant coefficient case, in the time-varying case the numerical solution may explode for stepsizes close to a critical value. To ensure decay of the numerical solution, one has to accept a strong extra stepsize restriction.
On the other hand, if we apply the Euler method to the reformulation with a properly stated leading term, we arrive at a recursion for which things work well. No stepsize restrictions are caused by stability.
Example 3.3 Consider the seemingly harmless Hessenberg index-2 DAE (3.20) with the coefficients given there. The leading term may be rewritten in a proper form by choosing (cf. (2.9)) a pair Ã, D̃. With this D̃, the subspaces D̃S̃_1 and D̃Ñ_1 move with time. We refer to [10] for the sobering effect of the numerical tests. Recall that this problem was introduced in [10] to demonstrate that just a Hessenberg form DAE may get into difficulties with extra stepsize restrictions.
Observe, on the other hand, that there are more possibilities in factorizing E. If we find a further well-matched factorization E = AD such that DS_1 and DN_1 appear to be time-invariant, we are done, since Theorem 3.1 applies. Such a much more comfortable factorization is given by another pair A, D. For the reformulated DAE, the subspaces DS_1 and DN_1 are constant, such that Theorem 3.1 applies in fact.
As we could see above, the problem whether discretization and decoupling commute depends primarily on how the DAE is formulated.
Definition 3.2 A DAE (2.1) with index 1 is said to be numerically well formulated if im D(t) is time-invariant (cf. [13]). A DAE (2.1) with index 2 is said to be numerically well formulated if D(t)S_1(t) and D(t)N_1(t) are constant.
This new possibility to rearrange subspaces for better numerical properties is a surprise. One should think further on how to exploit this idea best. Fortunately, e. g. in circuit simulation, the relevant subspaces could be shown to be constant for a large class of problems ([7], [5]), hence there is no need for reformulations.
Special questions concerning necessary reformulations will be discussed in [14], [15].
4 Abstract DAEs
Nowadays, e. g. in circuit simulation and simulation of multibody dynamics, there is a remarkable interest in complex systems that consist of coupled systems of DAEs and PDEs (cf. [21], [9]). There are also certain proposals for treating so-called PDAEs (= partial differential algebraic equations, e. g. [17]). Thereby, one of the questions to be considered is how to formulate initial and boundary conditions in an appropriate way.
In the following we try to show the usefulness of the decoupling procedure given in Section 2 for the finite dimensional case, now in an abstract modification. Being able to understand those abstract DAEs, a more systematical constructive and numerical treatment can be started on.
In this section we deal with abstract DAEs
A(t)(D(t)x(t))'(t) + B(t)x(t) = q(t),  t ∈ I,   (4.1)
where A(t), D(t) and B(t) are linear operators acting in the real Hilbert spaces X, Y, Z.
For all t ∈ I, let A(t) and D(t) be bounded and normally solvable. Moreover, let A(.) and D(.) depend continuously (in the norm sense) on t. As bounded maps, A(t) and D(t) have nullspaces ker A(t) and ker D(t), respectively, which are closed linear manifolds, i. e., subspaces. Due to the normal solvability, im A(t) and im D(t) are subspaces (e. g. [1]), too.
For all t ∈ I, let the operator B(t) be defined on a dense subset DB of X, and let B(.)x ∈ C(I, Y) for all x ∈ DB.
A continuous path x ∈ C(I, X) with x(t) ∈ DB for all t ∈ I is called a solution of (4.1) if Dx ∈ C^1(I, Z) and equation (4.1) is satisfied pointwise.
Definition 4.1 The leading term of (4.1) is properly stated if the operators A(t) and D(t), t ∈ I, are well matched in the following sense:
(i) ker A(t) ⊕ im D(t) = Z, t ∈ I.
(ii) The projector R(t) ∈ L_b(Z) that realizes this decomposition of Z (i. e., im R(t) = im D(t), ker R(t) = ker A(t)) depends continuously differentiably on t.
Remark 4.1 Since R(t) depends smoothly on t, the subspaces im D(t) are isomorphic for different values of t. For the same reason, the nullspaces ker A(t) are also isomorphic.
Let L_b(X) denote the linear space of linear bounded maps on X etc. For bounded maps, we always assume their definition region to be the respective whole space.
Let cl M denote the closure of the set M in the respective space.
Next we generalize the matrix and subspace chain given in (2.6) by introducing the following further linear maps, linear manifolds and subspaces for t ∈ I, where closures now have to be taken:
cl ker G_0(t), cl im G_0(t), cl ker G_1(t), cl im G_1(t), and so on.
By construction, the operators G_1(t) and G_2(t) are at least densely defined in X. In our context, operator products (in particular those with projectors) are often defined on a larger region by trivial reasons. We will always use the maximal trivial extensions.
Let D(t)^- denote the reflexive generalized inverse of D(t) such that D(t)D(t)^- = R(t) and D(t)^-D(t) = P_0(t).
Definition 4.2 Equation (4.1) with a properly stated leading term is said to be
(i) an abstract index-1 DAE if, for all t ∈ I, dim(im W_0(t)) > 0 and G_1(t) is injective and densely solvable,
(ii) an abstract index-2 DAE if, for all t ∈ I, the relevant projector depends continuously on t and G_2(t) is injective and densely solvable.
If also B(t) is a bounded map into Y that is continuous with respect to t in the norm sense, then the assertions in Section 2, in particular Theorem 2.3, can be modified to hold true for the abstract DAE in a straightforward way. The more challenging problem is an only densely defined B(t). Hence, we are going to study three different cases of this type.
Case 1: A coupled system of a PDE and Fredholm integral equations (on a special application of this type I was kindly informed by Hermann Brunner). Given a linear Fredholm integral operator K, a linear differential operator with coefficient c ≥ 0, and linear bounded coupling operators, the system to be considered is (4.2).
Using the corresponding matrix representations for A; D; B; with
s L
2 we rewrite (4.2) in the form
(4.1). Namely, we have
K L
is dened on
s .
Clearly, it holds that N
I
I
are dened on X (as trivial extensions of bounded maps).
G 1 is a bijection such that this abstract DAE has index 1.
we nd DG 1
dened on _
Each solution of the DAE is given by the expression
where u(t) is a solution of the abstract regular dierential equation
Obviously, one has to state an appropriate initial condition for (4.3), i. e.,
Case 2: A special linear constant coe-cient PDAE discussed in [17].
Consider the PDAE4 1
Suppose 3.
How should we formulate boundary and initial conditions?
Rewrite (4.4) in abstract form with x(t) := w(; t).
Choose
L
L
L
and use
again matrix representaions for our coe-cients, i. e.,
0 a
is the Laplacian. Let us start with
The operator
is dened on X and bounded.
Obviously, ker G is a nontrivial
closed subspaces and im G
L
0 is closed.
is dened on
is a subspace, and obviously
0 a
5 represents a projector onto N 1 along S 1 , thus
. The map
is dened on L
C
L
8 further
L
C
densely solvable.
The injectivity of G 2 is immediately checked, therefore, this DAE has index
2.
The inherent regular dierential equation is now
with
and
dened on L
dened on C
L
i. e., (4.6) is in fact nothing else but
Now it becomes clear that, for (4.6), the initial condition
as well as boundary conditions should be
given. We take homogeneous Dirichlet conditions and put them into the
denition region of B, i. e., we restart our procedure with
instead of (4.5) or, in spite of more general solvability, with
Using (4.8), G 2 is dened on L
L
6 but G 1
2 on
L
Consequently, admissible right-hand sides are q(t)
with
1(Computing
0 a
we nd the solution representation (cf. (2.23)
A q(t) +@ 0a
c3
q(t) a
while x 1 (t) solves (4.7). No further initial or boundary conditions should be
given. With (4.8) we obtain the unique solvability of the IVP
Case 3: A PDAE and a DAE coupled by a restriction operator.
Consider the system
~
uj
@
~
Assume the linear restriction map
R m to be bounded
and to depend continuously on t.
Rewrite the system (4.9), (4.10) as an abstract DAE for
~ u(; t)
~
and choose
A :=
A
A ~
R
Restrik ~
where DB := _
is dened on X,
im ~
Supposing that the operator Restrik(t) maps into im ~
holds that
~
is the projector onto N 1 along S 1 , G
In the general case, the projector Q_1 onto N_1 along S_1 is more difficult to construct. Obviously, we have a corresponding relation for N_1 ∩ S_1, and G_2 is a bijection if ~G_2 is so.
Consequently, the coupled system (4.9), (4.10) interpreted as an abstract DAE has the same index as the DAE (4.10).
5 Linear-quadratic control problems
Now, the quadratic cost functional (5.1) is to be minimized on solutions of the DAE (5.2) which satisfy the initial condition (5.3), where all spaces involved are real Hilbert spaces. Let all these operators depend continuously in the norm sense on t.
Assume further:
(ii) V and the weighting maps in (5.1) are positive semidefinite.
(iii) A(t) and D(t) are normally solvable and well matched for t ∈ I.
Admissible controls are those functions u ∈ C(I, U) for which a solution x ∈ C^1_D(I, X) of the IVP (5.2), (5.3) exists.
Consider the boundary value problem (5.4), (5.5). The system (5.4) is a DAE with a properly stated leading term. D(t)^* and A(t)^* are normally solvable and well matched at the same time as A(t) and D(t) are so.
Theorem 5.1 If the triple x ∈ C^1_D(I, X), y ∈ C^1_{A^*}(I, Y), u ∈ C(I, U) solves the BVP (5.4), (5.5), then u is an optimal control for the problem (5.1)-(5.3).
Proof. By straightforward calculations we prove that the corresponding expression vanishes. Then the assertion follows immediately from
R x(t) x (t)
u(t) u (t)
x(t) x (t)
u(t) u (t)
dt
In Theorem 5.1, no assumption on the index of the DAE (5.2) is made. However, for obtaining a theoretically and practically solvable BVP (5.4), (5.5), additional conditions ensuring the index-1 property of (5.4) should be satisfied.
denote the orthoprojectors onto ker AD and along im AD, respectively
(cf. Section 4).
Denote
Lemma 5.2 The DAE (5.4) has index 1 i, for t 2 I,
Proof. Put ~
The respective maps for (5.4) rewritten as ~
are
~
~
~
~
Y is a bijection i
acts bijectively on
ker D(t) ker A(t) U onto ker A(t) ker D(t) U .
Further, ~
is bijective at the same time as G 1 (t) is so (cf. Remark 2.3).
Remark 5.1 In [16], a linear-quadratic control problem of a similar form is studied. Instead of the DAE (5.2), an equation with a somehow incomplete leading term is considered. Because of this, the resulting system corresponding to (5.4) looks less transparent.
Remark 5.2 Since the operator F has the same structure as its counterpart in [16], all sufficient conditions for bijectivity proved in [16] hold true also here.
Lemma 5.3 Let, for t 2 I,
be bijective. Then the inherent regular dierential equation of (5.4) is of the
Dx
A
R
Dx
A
are positive semidenite.
Proof. This assertion can be proved following the lines of [16], Theorem 1.
Obviously, (5.8) is a non-negative Hamiltonian system under the condition indicated above, but this is exactly the case if (5.4) is numerically well formulated. For Hamiltonian systems, we know the respective BVPs to be solvable.
Theorem 5.4 Let F(t) in (5.7) be bijective and let the remaining positivity assumptions hold. Then there is at least one optimal control for the problem (5.1), (5.2), (5.3). If Z is finite dimensional, there is exactly one optimal control of (5.1), (5.2), (5.3).
Proof. It remains to show uniqueness in case of finite dimensional Z, but this can be done in the same way as [16], Theorem 4, is proved.
--R
Theorie der linearen Operatoren im Hilbertraum.
Numerical solution of initial value problems in di
Indexes and special discretization methods for linear partial di
Simulation gekoppelter Systeme von partiellen und di
--TR
Difference methods for the numerical solution of time-varying singular systems of differential equation
Stability of computational methods for constrained dynamics systems
On Asymptotics in Case of Linear Index-2 Differential-Algebraic Equations
Runge-Kutta methods for DAEs. A new approach
A Differentiation Index for Partial Differential-Algebraic Equations
Analyzing the stability behaviour of solutions and their approximations in case of index-2 differential-algebraic systems
--CTR
G. A. Kurina, Linear-Quadratic Discrete Optimal Control Problems for Descriptor Systems in Hilbert Space, Journal of Dynamical and Control Systems, v.10 n.3, p.365-375, July 2004
I. Higueras, R. März, C. Tischendorf, Stability preserving integration of index-1 DAEs, Applied Numerical Mathematics, v.45 n.2-3, p.175-200, May
I. Higueras, R. März, C. Tischendorf, Stability preserving integration of index-2 DAEs, Applied Numerical Mathematics, v.45 n.2-3, p.201-229, May
635794 | Non-nested multi-level solvers for finite element discretisations of mixed problems. | We consider a general framework for analysing the convergence of multi-grid solvers applied to finite element discretisations of mixed problems, both of conforming and nonconforming type. As a basic new feature, our approach allows to use different finite element discretisations on each level of the multi-grid hierarchy. Thus, in our multi-level approach, accurate higher order finite element discretisations can be combined with fast multi-level solvers based on lower order (nonconforming) finite element discretisations. This leads to the design of efficient multi-level solvers for higher order finite element discretisations. | Introduction
Multi-grid methods are among the most efficient and most popular solvers for finite element discretisations of elliptic partial differential equations. Their convergence theory in the case of symmetric operators and nested conforming finite element methods is well-established; see for example the books [7, 17, 30] and the bibliographies therein. Multi-grid methods for nonconforming finite element approximations have been also studied in a number of papers, e.g. see [3, 4, 6, 8, 9, 10, 11, 12, 13, 19, 31]. In this case, the finite element space of a coarser level is in general not a subspace of the finite element space of a finer level;
Institut für Analysis und Numerik, Otto-von-Guericke-Universität Magdeburg, PF 4120, D-39016 Magdeburg, Germany
† Institute of Numerical Mathematics, Faculty of Mathematics and Physics, Charles University,
the two-level convergence of non-nested multi-grid methods, developed in [4] for elliptic
problems, will be the starting point for the methods studied herein. Multi-grid methods
for mixed problems, arising in the discretisation of the Stokes equations, are analysed in
[3, 5, 9, 19, 27, 31, 34]. The crucial point in the investigation of multi-grid methods for
mixed problems is the denition and analysis of the smoother. The currently best understood
type of smoothers, here called Braess-Sarazin type smoother, is a class of symmetric
incomplete Uzawa iteration proposed in [1] and analysed on its smoothing properties in
[5, 27, 34]. The Braess-Sarazin type smoother has a convergence rate of O(1=m) with
respect to the number of smoothing steps. By far the most analytical studies are applied
to standard multi-grid methods where on each multi-grid level the same discretisation of
the partial dierential equation is used.
In this paper, we investigate multi-level solvers for nite element discretisations of
mixed problems which allow dierent discretisations, in particular the use of dierent -
nite element spaces, on each level of the multi-grid hierarchy. The motivation for using this
type of multi-level solvers comes from general experiences that standard multi-level solvers
are very ecient for low order discretisations. But higher order discretisations might lead
to an overwhelming gain of accuracy of the computed solution so that their use should
be preferred for this reason. The multi-level solvers investigated in this paper allow an
accurate discretisation on the nest level and low order discretisations on all coarser levels.
In this way, an accurate solution can be obtained for whose computation the eciency of
multi-level solvers for low order discretisations is exploited. The crucial point in the construction
of such multi-level methods is the transfer operator between the nite element
spaces dened on dierent levels. We show that rather simple L 2 -stable prolongations
guarantee already the convergence of the two-level method for a suciently large number
of smoothing steps with a Braess-Sarazin type smoother. Our approach allows to handle
conforming and nonconforming nite element spaces in a general framework. As a concrete
application of the general theory developed in this paper, we have in mind in particular the
Stokes and Navier-Stokes equations. However, it is clear that the same ideas can be also
applied to selfadjoint elliptic equations. The eciency of multi-grid solvers for lowest order
non-conforming discretisations of these equations has been demonstrated, e.g., in [20, 32]
and the gain of accuracy of higher order discretisations in a benchmark problem in [18].
Dierent discretisations on the nest and on coarser grids have been already used in the
convergence theory of nonconforming nite element discretisations of the Possion equation
in [12, 13]. However, the motive of the approach in [12, 13] is completely dierent to ours.
The replacement of the P 1 -nonconforming coarse grid correction by a conforming P 1 coarse
grid correction enables the authors to apply the well-developed theory of multi-grid solvers
for conforming discretisations on the coarser levels.
The plan of the paper is as follows. In Section 2, we investigate the convergence properties
of a multi-level method for solving mixed nite element discretisations in an abstract
way. First, we introduce the variational and the discrete mixed problem and we describe the
matrix representation. Then, based on abstract mappings between nite element spaces,
the prolongation and restriction operators are dened. The smoothing property of the
basic iteration proposed in [5] is shown for symmetric positive denite spectral equivalent
pre-conditioners and without an additional projection step. Together with the approximation
property, the convergence of a two-level method and the W-cycle of a multi-level
method are established. Section 3 is devoted to the construction of a general mapping
between two nite element spaces. We show that this general transfer operator satises
all assumptions which are essential for our theory. As applications, we show in Section 4
how various discretisation concepts for solving the Stokes problem t in our general theory.
Finally, we present numerical results to verify the theoretical predictions on the multi-level
solvers.
Throughout this paper, we denote by C a \universal" constant which is independent
of the mesh size and the level but whose value can dier from place to place.
2 Multi-level Approach
2.1 Variational Problem
We consider a variational problem of the following form. Let V and Q be two Hilbert spaces and let V ⊆ H ⊆ V' be the Gelfand triple. Given the symmetric bilinear form a: V × V → R, the bilinear form b: V × Q → R, and a functional f ∈ V', we look for a solution (u, p) ∈ V × Q of
a(u, v) + b(v, p) = f(v)  for all v ∈ V,   (1)
b(u, q) = 0  for all q ∈ Q.   (2)
We assume:
(H1) (Solvability) For all f ∈ V', the problem (1), (2) admits a unique solution (u, p) ∈ V × Q.
One example for this type of problems is the weak formulation of the Stokes problem in d space dimensions
-Δu + ∇p = f in Ω,  ∇·u = 0 in Ω,  u = 0 on ∂Ω,   (3)
where Ω is a bounded domain with Lipschitz continuous boundary. Here we set V = H^1_0(Ω)^d, Q = L^2_0(Ω), H = L^2(Ω)^d, and a and b are given by
a(u, v) = ∫_Ω ∇u : ∇v dx,   b(v, p) = -∫_Ω p ∇·v dx.   (4)
Note that in this case (H1) is satisfied [16] since a is V-elliptic and b satisfies the Babuska-Brezzi condition, i.e., there is a positive constant β such that
sup_{v ∈ V, v ≠ 0} |b(v, q)| / ||v||_V ≥ β ||q||_Q  for all q ∈ Q.   (5)
2.2 Discretisation
Let {V_l}_{l ≥ 0} and {Q_l}_{l ≥ 0} be sequences of (possibly nonconforming) finite element spaces approximating V and Q, respectively. Instead of the continuous bilinear forms a and b we use the discrete versions a_l: V_l × V_l → R and b_l: V_l × Q_l → R. We again assume that the bilinear forms a_l are symmetric. For f ∈ H the discrete problem corresponding to (1), (2) reads:
Find (u_l, p_l) ∈ V_l × Q_l such that
a_l(u_l, v_l) + b_l(v_l, p_l) = (f, v_l)  for all v_l ∈ V_l,   (6)
b_l(u_l, q_l) = 0  for all q_l ∈ Q_l,   (7)
where (·, ·) denotes the inner product in H.
(H2) (Solvability and convergence) We assume that the problem (6), (7) admits a unique solution and that the error estimate
||u - u_l||_H ≤ C h_l^2 ||f||_H   (8)
holds, where h_l characterises how fine the finite element mesh is on which V_l and Q_l are defined.
Remark 2.1 Typically, the convergence estimate (8) is established using a regularity property of the problem (1), (2). Such a regularity property usually states that if f ∈ H, then the solution (u, p) of (1), (2) belongs to a 'better' space W × R ⊆ V × Q and satisfies
||u||_W + ||p||_R ≤ C ||f||_H.
For example, for the Stokes problem mentioned above, one has W = H^2(Ω)^d ∩ H^1_0(Ω)^d, R = H^1(Ω) ∩ L^2_0(Ω), and the regularity property holds if the boundary of Ω is of class C^2 or Ω is a plane convex polygon. For (u, p) ∈ W × R one can often prove that the solution (u_l, p_l) of (6), (7) satisfies
||u - u_l||_H ≤ C h_l^2 (||u||_W + ||p||_R),
and taking into consideration the estimate in the regularity property, one obtains (8).
2.3 Matrix Representation
Let {φ_l^i : i ∈ I_l} and {ψ_l^j : j ∈ J_l} be bases of the spaces V_l and Q_l, respectively, where I_l, J_l denote the corresponding index sets. The unique representations of v_l ∈ V_l and q_l ∈ Q_l with respect to these bases define the finite element isomorphisms between the vector spaces of coefficient vectors u_l ∈ U_l and p_l ∈ P_l and the finite element spaces V_l and Q_l, respectively. We introduce the finite element matrices A_l and B_l having the entries a_l(φ_l^j, φ_l^i) and b_l(φ_l^j, ψ_l^i), respectively. Now the discrete problem (6), (7) is equivalent to
A_l u_l + B_l^T p_l = f_l,
B_l u_l = 0,   (10)
with f_l := ((f, φ_l^i))_{i ∈ I_l}. Note that A_l is a symmetric matrix. In the vector spaces U_l and P_l, we will use the usual Euclidean norms scaled by suitable factors such that the corresponding norm equivalences between ||v_l||_H, ||q_l||_Q and the scaled Euclidean norms of the coefficient vectors are satisfied with a mesh- and level-independent constant C.
2.4 Prolongation and Restriction
Essential ingredients of a multi-level algorithm for mixed problems are the prolongations P^u_{l-1,l}: U_{l-1} → U_l and P^p_{l-1,l}: P_{l-1} → P_l and the restrictions R^u_{l,l-1}: U_l → U_{l-1} and R^p_{l,l-1}: P_l → P_{l-1}.
In case of a nested finite element hierarchy V_0 ⊆ ... ⊆ V_{l-1} ⊆ V_l, the canonical prolongation P^u_{l-1,l} is obtained from the finite element isomorphisms between U_{l-1} and V_{l-1}, between U_l and V_l, and the embedding V_{l-1} ⊆ V_l: the canonical prolongation (11) is the composition of the coefficient-to-function isomorphism on level l-1 with the inverse of the isomorphism on level l. Similarly, we would obtain the canonical prolongation P^p_{l-1,l} for the pressure, provided Q_0 ⊆ ... ⊆ Q_{l-1} ⊆ Q_l. The assumed inclusion property Q_0 ⊆ ... ⊆ Q_{l-1} ⊆ Q_l is often but not always satisfied in applications. For example, this inclusion property is violated if the spaces Q_l are constructed using the nonconforming piecewise linear element. The corresponding velocity spaces V_l, such that the pair (V_l, Q_l) satisfies the discrete version of (5), may then be constructed using the bubble enriched nonconforming piecewise linear element [33] or the modified nonconforming piecewise linear element [22]. As a different example one could also think of using a continuous pressure space Q_l on the level l and a discontinuous space Q_{l-1} on the level l - 1. We emphasise that Q_{l-1} can be defined on the same mesh as Q_l.
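For nested conforming P1 elements on uniformly refined 1D meshes, the canonical prolongation has a simple concrete form. The following sketch is our own illustration, not taken from the paper: coarse nodal values are kept at coincident fine nodes and averaged at the new midpoints.

```python
import numpy as np

def p1_prolongation_1d(n_coarse_cells):
    """Canonical prolongation matrix for nested P1 spaces on [0, 1]:
    coarse mesh with n_coarse_cells cells, fine mesh by midpoint refinement.
    Only interior (homogeneous Dirichlet) degrees of freedom are kept."""
    nc = n_coarse_cells - 1          # interior coarse nodes
    nf = 2 * n_coarse_cells - 1      # interior fine nodes
    P = np.zeros((nf, nc))
    for j in range(nc):              # coarse node j coincides with fine node 2j+1
        P[2 * j + 1, j] = 1.0
        P[2 * j, j] += 0.5           # left midpoint neighbour
        P[2 * j + 2, j] += 0.5       # right midpoint neighbour
    return P

if __name__ == "__main__":
    print(p1_prolongation_1d(4))
```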
In the general case of non-nested velocity spaces, when V_{l-1} is not contained in V_l, we have to replace the natural embedding by a suitable transfer mapping, which results in the corresponding prolongation and restriction for the velocity. It turns out that the convergence analysis requires estimates for this mapping on the sum V_{l-1} + V_l; thus we will define it on a (possibly larger) space that contains V_{l-1} + V_l and is contained in H. In the case that Q_{l-1} is not contained in Q_l, we introduce an analogous mapping for the pressure, giving the corresponding prolongation and restriction, respectively.
We assume that the following properties (H3) and (H4) of the velocity transfer mapping hold.
Remark 2.2 In [4], the velocity transfer mapping has been assumed to be the product of two mappings in order to allow more flexibility in constructing a suitable transfer operator. In many cases S can be chosen to coincide with V_l or U_l.
2.5 Smoothing Property
In this section, we will omit the index l indicating the current level and the underline symbol denoting coefficient vectors in U and P. For smoothing the error of an approximate solution of (10), we take the basic iteration (13), j ≥ 0, which can be considered as a special case of the symmetric incomplete Uzawa algorithm proposed by Bank, Welfert and Yserentant in [1]. The smoothing properties of (13) have been studied in [5] for a special case, in [27] for the general case provided that an additional projection step is performed, and in [34] for a more general setting. For completeness, we give a proof of the smoothing property without an additional projection step, using partially results of [27].
The matrix D is a pre-conditioner of A such that the linear system (13) is more easily solvable than (10). Note that the second equation of (13) enforces B u^{j+1} = 0, implying that after one smoothing step the iterate u^{j+1}, j ≥ 0, is discretely divergence-free. Here and in the following, we assume that
(H5) D is symmetric and positive definite and B B^T is regular.
The matrix B B^T is regular if the bilinear form b_l generating the matrix B satisfies a discrete version of the Babuska-Brezzi condition (5). From the regularity of B B^T it follows that S := B D^{-1} B^T is also regular.
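For readers who prefer code, here is a compact sketch (our own, with dense linear algebra and a general right-hand side g) of one smoothing step of this Braess-Sarazin / symmetric incomplete Uzawa type: the saddle point system with A replaced by the preconditioner D is solved exactly via the Schur complement S = B D^{-1} B^T.

```python
import numpy as np

def braess_sarazin_step(A, B, D, f, g, u, p):
    """One smoothing step of a Braess-Sarazin type iteration for
        A u + B^T p = f,   B u = g.
    D is a symmetric positive definite preconditioner of A; the update
    solves the saddle point system with A replaced by D."""
    ru = f - A @ u - B.T @ p           # momentum residual
    rp = g - B @ u                     # divergence residual
    Dinv_ru = np.linalg.solve(D, ru)
    S = B @ np.linalg.solve(D, B.T)    # Schur complement B D^{-1} B^T
    dp = np.linalg.solve(S, B @ Dinv_ru - rp)
    du = np.linalg.solve(D, ru - B.T @ dp)
    return u + du, p + dp              # note: B (u + du) = g exactly
```

In the setting of (10) one has g = 0, so after one such step the iterate is discretely divergence-free, matching the observation above.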
The iteration (13) is a so-called u-dominant method since the new iterate (u^{j+1}, p^{j+1}) depends on u^j but not on p^j. Indeed, let (u, p) be the solution of (10). Then, we have
u u j+1
(D A)(u
It is easy to verify that
and hence we particularly obtain for j 0
a projector, we can also write
where M is given by From (14) and (15) it follows for m 3
In the case of D = I, both P and M are symmetric. Choosing the damping parameter not smaller than the largest eigenvalue of A, the iteration matrix is a contraction. Moreover, the spectrum of M is contained in [0, 1] and the usual spectral decomposition argument (see [3]) results in the smoothing property (19) for m >= 3. For proving (19) it is essential that the basis functions are chosen in such a way that a suitable normalisation holds. The proof of the smoothing property for D ≠ I is more tricky.
In [27] an additional pressure update has been introduced to guarantee the smoothing
property. However, as already noticed in [3], a careful inspection shows that this additional
step can be omitted.
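To make the structure of such a smoothing step concrete, the following sketch (Python with NumPy; the matrix names A, B, D, the right-hand sides f, g and the scaling by a damping parameter alpha are our assumptions, not the exact iteration omitted above) solves the saddle point system with A replaced by alpha*D exactly via the pressure Schur complement, which makes the new velocity iterate discretely divergence-free:

import numpy as np

def smoothing_step(A, B, D, f, g, u, p, alpha=1.0):
    """One Braess-Sarazin-type smoothing step (dense sketch).

    Solves  [alpha*D  B^T] [du]   [f - A u - B^T p]
            [B        0  ] [dp] = [g - B u        ]
    exactly via the pressure Schur complement, so that
    B (u + du) = g, i.e. the new velocity iterate satisfies
    the discrete divergence constraint.
    """
    r_u = f - A @ u - B.T @ p          # velocity residual
    r_p = g - B @ u                    # divergence residual
    Dinv = np.linalg.inv(alpha * D)    # cheap if D is (block) diagonal
    S = B @ Dinv @ B.T                 # pressure Schur complement
    dp = np.linalg.solve(S, B @ Dinv @ r_u - r_p)
    du = Dinv @ (r_u - B.T @ dp)
    return u + du, p + dp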
Lemma 2.1 Let the damping parameter be chosen such that the required bounds hold for some level- and mesh-independent constants, one in [1, 2) and one positive. Moreover, let the basis be chosen such that the normalisation above holds. Then, the basic iteration satisfies the smoothing property
Proof. The projector P is similar to the symmetric matrix D^{1/2} P D^{-1/2}. Since D^{1/2} P D^{-1/2} is symmetric it follows that D^{1/2} M D^{-1/2} is also symmetric. In [27], it has been shown (see the proof of Lemma 3.9) that the spectrum of M satisfies the required bound for some constant in [1, 2). This assumption holds for the chosen damping parameter. Then, from the representation of M we get, by a spectral decomposition argument applied to the symmetric matrix D^{1/2} M D^{-1/2}, the existence of a (level-independent) constant C such that
Since
I
1
1
with
the mapping
I
is a contraction for 2 =2. Thus, from (18) and (23), we get for m 3
C
The cases have to be considered separately. Starting with (14), we have
for
A(u
(D A)(u j+1
Taking into consideration that I 1 D 1=2 AD 1=2 is contractive for 2 =2, we obtain
for
kA(u
From (16), we get
and because
is the product of the projector D 1=2 PD 1=2 and the contraction (I 1 D 1=2 AD 1=2 ),
we conclude for all j 0
Using the triangle inequality, we finally obtain from (27) for all j >= 0
kA(u
which proves (20) for m <= 2. □
2.6 Approximation Property
Let an approximation (ũ_l, p̃_l) of the solution of the problem (6), (7) be given. We can think of (ũ_l, p̃_l) as being the result after some smoothing steps, and consequently assume that it satisfies the discrete divergence constraint, i.e., b_l(ũ_l, q_l) = 0 for all q_l ∈ Q_l.
Then, the coarse-level correction is defined as the solution of the following problem:
Find (u_{l-1}, p_{l-1}) ∈ V_{l-1} × Q_{l-1} such that for all (v_{l-1}, q_{l-1}) ∈ V_{l-1} × Q_{l-1} the coarse-level variational equations (28), with the restricted residual of (ũ_l, p̃_l) as right-hand side, are satisfied.
The coarse-level correction yields, via the transfer operators from Section 2.4, the new approximation (u_l^new, p_l^new) defined in (29).
Now, the basic idea for proving the approximation property is to construct an auxiliary (continuous) problem such that (u_{l-1}, p_{l-1}) and (u_l − ũ_l, p_l − p̃_l) are finite element solutions of the corresponding discrete problems on the spaces V_{l-1} × Q_{l-1} and V_l × Q_l, respectively. This idea has been used for scalar elliptic equations already in [6] and has been applied to more general situations in [4], [19]. We define the Riesz representation F_l of the residue by
Then, the auxiliary problem will be:
Find (z, w) ∈ V × Q such that the continuous variational equations with right-hand side F_l hold.
Indeed, testing with functions from V_l shows that (u_l − ũ_l, p_l − p̃_l) is a finite element approximation of (z, w) in the space V_l × Q_l. On the other hand, the right-hand side becomes just the right-hand side of (28) if the test functions are transferred from the coarse space, i.e., (u_{l-1}, p_{l-1}) is the finite element approximation of (z, w) in the space V_{l-1} × Q_{l-1}.
Lemma 2.2 Let h_{l-1} <= C h_l with a mesh- and level-independent constant C. Then, the approximation property
||u_l − u_l^new||_H <= C h_l^2 ||A_l(u_l − ũ_l) + B_l^T(p_l − p̃_l)||_{U_l}
holds.
Proof. Applying (H3), (H4), the triangle inequality, and (8), we get
l u new
l
l
l
Cku l ~
l 1 kH
C(ku l ~
It remains to state a relation between the residue F_l and the "algebraic" residue d_l ∈ U_l given by
d_l := A_l(u_l − ũ_l) + B_l^T(p_l − p̃_l).
For this, we first consider the Riesz representation of the residue in V_l, i.e., r_l ∈ V_l defined by
Since
A l (u l ~
l (p l
~p l
U l
for all v l 2 V l , we have
l r l ; v l ) U l
from which the representation
l r l (33)
follows. Now, using
l
l u F l ) U l
we get from (11) and (H4)
H kd l k U l
l u F l k U l
Ckd l k U l
k u F l k H Ckd l k U l
Dividing by ||F_l||_H, we obtain the desired bound, which together with (32) yields the statement of the lemma. □
2.7 Multi-level Convergence
We briefly describe the two-level algorithm using m smoothing steps on the level l, l >= 1, and the coarse-level correction (28), (29). Let (u_l^0, p_l^0) be an initial guess for the solution (u_l, p_l) of (6), (7). We apply m steps of the basic iteration and obtain (u_l^m, p_l^m). Now, the coarse-level correction (28), (29) is performed using (ũ_l, p̃_l) := (u_l^m, p_l^m) as an approximate solution of the discrete problem (6), (7). Finally, the new approximation (u_l^new, p_l^new) is defined by (29).
Combining the smoothing and approximation property, we obtain
Theorem 2.1 Under the assumptions of Lemma 2.1 and Lemma 2.2, the two-level method converges for sufficiently many smoothing steps with respect to the H- and U_l-norm. In particular, there are level- and mesh-independent constants C and C̃ such that
||u_l − u_l^new||_{U_l} <= (C/m) ||u_l − u_l^0||_{U_l}
and
||u_l − u_l^new||_H <= (C̃/m) ||u_l − u_l^0||_H.
Proof. Applying Lemma 2.1 we have
l
l (p l
l
C
l
l k U l
Taking into consideration the norm equivalence (11) and Lemma 2.2, we conclude
l u new
l k U l
Cku l u new
l k H
l kA l (u l ~
l (p l
~p l
l k U l
which proves convergence in the U_l-norm for sufficiently many smoothing steps m. The convergence in the H-norm follows from the norm equivalence (11). □
Remark 2.3 Note that in our context the notation "two-level" does not necessarily mean that we consider two levels of mesh refinements. Thus, Theorem 2.1 is also applicable in cases where we have two different finite element discretisations on the same mesh (see Sections 4.2 and 4.3).
Once the convergence of the two-level method is proven, the convergence of the W-cycle multi-level method follows in a standard way (see e.g. [4], [17]). This is also true for a combination of a finite number of different two-level algorithms provided that Theorem 2.1 holds true for each of these two-level methods.
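For illustration, a minimal sketch (Python; all operators are placeholder callables that a concrete discretisation would have to supply, and NumPy arrays are assumed for the unknowns) of how the two-level step just described extends recursively to a W-cycle with pre-smoothing only:

def w_cycle(level, u, p, rhs_u, rhs_p, ctx, m=3, gamma=2):
    """Multi-level cycle for the saddle point problem (sketch, level >= 1).

    ctx[level] is assumed to hold the operators of that level:
    'smooth' (m smoothing steps), 'residual', 'restrict_u/p',
    'prolong_u/p' and, on level 0, 'coarse_solve'.
    """
    L = ctx[level]
    u, p = L['smooth'](u, p, rhs_u, rhs_p, m)          # pre-smoothing
    res_u, res_p = L['residual'](u, p, rhs_u, rhs_p)   # fine-level residual
    cu, cp = L['restrict_u'](res_u), L['restrict_p'](res_p)
    if level == 1:
        eu, ep = ctx[0]['coarse_solve'](cu, cp)        # exact solve on level 0
    else:
        eu, ep = 0 * cu, 0 * cp
        for _ in range(gamma):                         # gamma = 2: W-cycle
            eu, ep = w_cycle(level - 1, eu, ep, cu, cp, ctx, m, gamma)
    u = u + L['prolong_u'](eu)                         # coarse-level correction
    p = p + L['prolong_p'](ep)
    return u, p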
3 A General Transfer Operator
In this section we will describe general finite element spaces V_l and we will show how a suitable intermediate space and a transfer operator satisfying the assumptions (H3) and (H4) can then be constructed. In this way, the transfer operator can be applied to a large class of finite elements, including, in case of the Stokes problem, velocity spaces generated by vector-valued basis functions [16].
We denote by {T_l}_{l>=0} a family of triangulations of the domain Ω. Each triangulation T_l consists of a finite number of mutually disjoint, simply connected open cells K covering Ω. We assume that
(H6) For any l > 0, the triangulation T_l is obtained from T_{l-1} by some "refinement", i.e., each cell K ∈ T_{l-1} is either a member of T_l or it has been refined into child cells belonging to T_l.
In particular, we allow T_l = T_{l-1}, which gives us the additional possibility to define different finite element spaces V_{l-1} and V_l on the same mesh.
We assume that there exists a finite number of reference domains K̂_1, ..., K̂_M such that, for any level l >= 0 and for any cell K ∈ T_l, there exists an index i ∈ {1, ..., M} and a one-to-one mapping F_K ∈ W^{1,∞}(K̂_i) with F_K(K̂_i) = K (37). We assume that suitable norms of F_K and of its inverse are bounded independently of K and l (38). In addition, we suppose that the refinement levels of neighbouring cells differ only boundedly (39) and that each cell K contains a ball B_K of diameter comparable to that of K (40), again with constants independent of K, K' and l. The validity of (38)-(40) usually follows from some shape-regularity assumption on the triangulations. Particularly, for simplicial triangulations, the condition (40) already guarantees a shape-regularity of the cells and implies (38). Moreover, if the triangulations satisfy usual compatibility assumptions (cf. e.g. [14]), then (39) also follows. In case of hanging nodes, the difference in the refinement levels of neighbouring elements is limited by (39).
On each reference cell K̂_i, i = 1, ..., M, we introduce a finite-dimensional space P̂_i. Employing the mappings F_K from (37), we introduce, for any cell K ∈ T_l, a finite-dimensional space P_l(K) ⊂ H^1(K)^d whose functions are obtained by transforming functions from P̂_i. We denote by φ_{K,j}, j = 1, ..., dim P_l(K), functions in P_l(K) whose transformations to the reference cell are linear combinations of the reference basis functions with coefficients a_{K,jk}. Usually, the coefficients a_{K,jk} are zeros or unit vectors in the direction of coordinate axes, and one takes only one non-vanishing coefficient for each j. However, in some cases, also other choices of the coefficients a_{K,jk} may be of use. Particularly, this is the case if the space P_l(K) contains vector-valued basis functions which cannot be obtained by transforming fixed basis functions from the reference space onto K. As an example may serve the Bernardi/Raugel element (cf. [2]) which contains vector-valued basis functions perpendicular to element faces.
We introduce linear functionals {N_{K,i}}_{i=1,...,dim P_l(K)} defined on P_l(K), which we will call local nodal functionals in the following. We assume that these functionals possess the usual duality property with respect to the basis functions, i.e., N_{K,i}(φ_{K,j}) = δ_{ij}, where δ_{ij} denotes the Kronecker symbol. Examples of such nodal functionals can be found in Section 4.
Now, for each level l, we introduce a finite element space V_l satisfying (41). This space is smaller than the piecewise discontinuous space on the right-hand side of (41) since we assume that there is some connection between functions on neighbouring cells. This connection can be enforced by choosing pairs of nodal functionals from neighbouring cells and by requiring that the two functionals from any pair are equal for any function from the space V_l. We denote by {φ_{l,i}}_{i∈I_l} a basis of the space V_l obtained in this way, where the set I_l is an index set whose elements will be called nodes in the following. We assume that the mentioned pairs of local nodal functionals were chosen in such a way that, as usual, these basis functions have "small" supports. Precisely, we require that, for any i ∈ I_l, there exists K ∈ T_l such that the support of φ_{l,i} is contained in the union of K and its neighbouring cells. Further, we assume that, on any cell K ∈ T_l, the basis functions φ_{l,i} coincide with the local basis functions φ_{K,j}. Thus, denoting by I_l(K) the set of all nodes which are associated with a cell K, we can introduce a one-to-one mapping between I_l(K) and the local indices on K such that corresponding global and local basis functions coincide on K. Using these mappings we can renumber the local nodal functionals accordingly. Furthermore, we define for any node i ∈ I_l the set T_{l,i} of all cells K ∈ T_l which are connected with the node i. Then we can give a precise characterization of the space V_l.
This shows that a natural choice for a global nodal functional is the arithmetic mean of the local nodal functionals,
N_{l,i}(w) := (1 / card T_{l,i}) Σ_{K ∈ T_{l,i}} N_{K,i}(w|_K).
Note that we again have the duality relation N_{l,i}(φ_{l,j}) = δ_{ij}, which implies that any v_l ∈ V_l can be written as v_l = Σ_{i∈I_l} N_{l,i}(v_l) φ_{l,i}.
Having described the spaces V_l, we can turn to the question how the intermediate spaces and the transfer mappings satisfying (36), (H3) and (H4) can be defined. Following the ideas developed in [29], where the construction of general transfer operators for finite element spaces has been investigated, we construct the intermediate space as a discontinuous finite element space, consisting of functions whose restriction to each cell K ∈ T_l belongs to a local space S_l(K), with a finite dimension. To guarantee (36), we furthermore assume that
(H7) The local function space S_l(K) is constructed such that it contains P_l(K) as well as the restriction to K of P_{l-1}(F(K)), where F(K) ∈ T_{l-1} is the parent cell of a child cell K ∈ T_l.
We suppose that, for any K ∈ T_l, the local nodal functionals N_{K,i} are well defined on S_l(K), which usually means that the functions in S_l(K) are of the same type as those in P_l(K) (e.g., continuous). Then we can define the transfer operator simply by assigning to w the function Σ_{i∈I_l} N_{l,i}(w) φ_{l,i}. (45)
In view of (44) we immediately obtain the validity of (H3). To be able to prove (H4), we assume that each of the spaces S_l(K) can be obtained by transforming functions from one of the reference cells onto K. Thus, we introduce finite element spaces Ŝ_1, ..., Ŝ_M with fixed bases, and we assume that, for any K ∈ T_l, there exists i ∈ {1, ..., M} such that S_l(K) is obtained by transforming Ŝ_i with F_K (46) and the corresponding nodal values of the transformed basis functions are uniformly bounded (47), where F_K is the mapping from (37) and C is independent of K and l. The last assumption is automatically satisfied if the local nodal functionals N_{K,i} are defined by means of a finite number of functionals defined on the reference spaces Ŝ_i, which is often the case.
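A minimal sketch of this transfer construction (Python; the data layout and the function names are our assumptions): the global nodal functional is taken as the arithmetic mean of the local nodal functionals over all cells meeting the node, and the transferred function is represented by its coefficients with respect to the basis {phi_{l,i}}:

def transfer_coefficients(nodes, cells_of_node, local_nodal_functional, w_on_cell):
    """Coefficients of the transferred function in the basis {phi_{l,i}}.

    nodes                  : iterable of node indices i in I_l
    cells_of_node[i]       : the cells K in T_{l,i} connected with node i
    local_nodal_functional : callable (K, i, w_K) -> N_{K,i}(w|_K)
    w_on_cell[K]           : local (cell-wise) representation of w on K
    """
    coeff = {}
    for i in nodes:
        cells = cells_of_node[i]
        vals = [local_nodal_functional(K, i, w_on_cell[K]) for K in cells]
        coeff[i] = sum(vals) / len(cells)   # arithmetic mean = N_{l,i}(w)
    return coeff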
Lemma 3.1 The transfer operator defined by (45) is uniformly L^2-stable, i.e., it satisfies (H4) with a constant C independent of l and with the L^2-norm.
Proof. Consider any w in the intermediate space and any K ∈ T_l, and let F_K and Ŝ_i be the mapping and the space from (46), respectively. Then w|_K has an expansion in the transformed basis with some real coefficients w_j, and it follows from (47), from the equivalence of norms on finite-dimensional spaces and from (38) that, for any m ∈ I_l(K), the value |N_{K,m}(w|_K)| is bounded by C times a scaled L^2-norm of w on K (48). Thus, we see that, for any i ∈ I_l, |N_{l,i}(w)| admits the same type of bound with the maximum taken over K ∈ T_{l,i}. For any cell K̃ ∈ T_l, we derive using (38), (48), (42) and (39) that
k u wk 0; e
i2I l ( e
e
i2I l ( e
jN l;i (w)j e
where we denoted by ω(K̃) the vicinity of the cell K̃, i.e., the union of the sets T_{l,i} over all nodes i ∈ I_l(K̃). In view of (39), (40) and (42), the number of cells in ω(K̃) is bounded independently of K̃ and l and hence we obtain a bound which expresses a local L^2-stability of the transfer operator. Again, according to (39), (40) and (42), the number of the vicinities ω(K̃) which contain any fixed cell K ∈ T_l is bounded independently of l and hence the local L^2-stability immediately implies the global L^2-stability (H4). □
A significant step in the above considerations was the assumption that there exist spaces S_l(K) satisfying (H7) and (46). In the remaining part of this section, we will prove that this is true for simplicial finite elements. Thus, from now on, we assume that there is only one reference cell K̂, which is a d-simplex. It is essential for our further proceeding that, for any simplicial cell K, there exists a regular affine mapping of K̂ onto K. We denote the set of all these mappings by L_R(K̂, K). As above, we introduce a fixed finite-dimensional space P̂ ⊂ H^1(K̂)^d and we assume that, for any K ∈ T_l and any F_K ∈ L_R(K̂, K), we have
P_l(K) ⊆ { v̂ ∘ F_K^{-1} : v̂ ∈ P̂ }.
We assume that, as usual, the cells of the triangulations are refined according to a finite number of geometrical rules. Therefore, the refinements of all cells K can be mapped by the linear mappings F_K^{-1} onto a finite number L̂ of refinements of the reference cell K̂ into child cells. That means that, for any cell K ∈ T_l obtained by refining a parent cell F(K) ∈ T_{l-1} and for any F_{F(K)} ∈ L_R(K̂, F(K)), there exist indices identifying the corresponding reference refinement and reference child cell. Then, for any v ∈ P_{l-1}(F(K)), the pull-back of v|_K to the corresponding reference child cell belongs to a fixed finite-dimensional space, and hence, for any K ∈ T_l and any F_K ∈ L_R(K̂, K), the space P_{l-1}(F(K))|_K can be obtained by transforming functions from a finite collection of reference spaces. Now, we define a single reference space as the span of these finitely many spaces and, for any K ∈ T_l, we choose some F_K ∈ L_R(K̂, K) and set S_l(K) to be the corresponding transformed space. It is easy to see that these spaces S_l(K) satisfy the assumption (H7).
4 Applications to the Stokes Equations
In this section we give details how the general assumptions made in Section 3 can be fulfilled for the Stokes problem. In particular, we will show that the usually used multi-grid technique for the nonconforming finite elements of lowest order coincides with the use of the general transfer operator described in the preceding section.
4.1 Lowest Order Nonconforming Elements
Our first examples are the nonconforming finite elements of first order on triangles and quadrilaterals. The triangular element was introduced by Crouzeix and Raviart and analysed in [15]. The element on quadrilaterals was established by Rannacher and Turek and analysed in [24, 28].
We consider a hierarchy of uniformly refined grids. Let T_0 be a regular triangulation of Ω into triangles or into convex quadrilaterals. The mesh T_l is obtained from T_{l-1} by subdividing each cell of T_{l-1} into four child cells. For triangles, we connect the midpoints of the edges. In the quadrilateral case, we connect the midpoints of opposite edges.
Now we construct the finite element spaces V_l. Let P_1(K) be the space of linear polynomials on the triangle K. The space Q_1^rot(K) of rotated bilinear functions on a quadrilateral K is defined via the bilinear reference transformation from the reference cell K̂ onto the cell K, see [24, 28]. Let E(K) denote the set of all edges of the element K. We define for any E ∈ E(K) the nodal functional
N_E(v) := (1/|E|) ∫_E v|_K ds.
Let E_l := ∪_{K∈T_l} E(K) be the set of all edges of T_l, and let E_l^∂ ⊂ E_l contain all boundary edges. The finite element space V_l consists of the functions that are piecewise in P(K), whose edge mean values coincide across interior edges and vanish on boundary edges, where P(K) is P_1(K) on triangles and Q_1^rot(K) on quadrilaterals.
Since the triangulation T_l is obtained from T_{l-1} by regular refinement of all cells, we have a nested mesh hierarchy. Let us show that the assumptions (H7) and (46) hold.
Figure 1: Refinement of original and reference cell.
In the case of triangles, the inclusion P_{l-1}(F(K))|_K ⊆ P_l(K) follows directly from the affine reference mapping, and we have P_{l-1}(F(K))|_K = P_l(K) = P_1(K). Thus, we can choose S_l(K) := P_1(K).
The situation is more complicated for the Q_1^rot element. Let us consider a cell K ∈ T_{l-1} and let K_1, ..., K_4 ∈ T_l be the child cells of K. We denote by K̂_1, ..., K̂_4 the corresponding sub-cells of the reference cell (cf. Fig. 1). Consider for example K_1: there is a function G_1 with F_{K_1} = F_K ∘ G_1. The mappings G_1 and F_{K_1} are bijective, and thus we have F_{K_1}^{-1} = G_1^{-1} ∘ F_K^{-1} on K_1. Since the local space span(1, x̂_1, x̂_2, x̂_1^2 − x̂_2^2) is invariant with respect to the mapping G_1, we conclude that P_{l-1}(K)|_{K_1} ⊆ P_l(K_1). The same arguments can be applied to K_2, ..., K_4, which results in P_{l-1}(K)|_{K_i} ⊆ P_l(K_i) = Q_1^rot(K_i). This allows us to choose S_l(K) := Q_1^rot(K) in the definition of the intermediate space. Note that the assumptions (46) and (47) are then fulfilled.
The finite element spaces V_{l-1} and V_l are non-nested. In order to get a suitable prolongation we will use the general transfer operator which was introduced in Section 3. The definition of the global nodal functionals (43) simplifies for the above spaces to
N_{l,E}(w) := ( N_E(w|_K) + N_E(w|_{K'}) ) / 2
for all inner edges E with the neighbouring elements K and K'. For boundary edges we get N_{l,E}(w) := N_E(w|_K), E ∈ E(K). The resulting transfer operator is the same as in [19].
The space Q_l which approximates the pressure consists of piecewise constant functions. For the proof of assumption (H2) we refer to [15, 24]. Since Q_{l-1} ⊂ Q_l the transfer operator for the pressure is the identity.
For numerical experiments with the Crouzeix-Raviart element we refer to [19].
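For these lowest order nonconforming elements the construction thus reduces to averaging edge mean values. The following sketch (Python with NumPy; the two-point Gauss rule is our choice, the text does not prescribe a quadrature) evaluates the edge functional N_E(v) = (1/|E|) * integral of v over E and the simplified global functional for an inner or a boundary edge:

import numpy as np

def edge_mean(v, a, b):
    """Approximate (1/|E|) * int_E v ds on the edge from vertex a to vertex b
    with a two-point Gauss rule (exact for the polynomial traces occurring here)."""
    t = 1.0 / np.sqrt(3.0)
    pts = [0.5 * (a + b) + 0.5 * s * (b - a) for s in (-t, t)]
    return 0.5 * (v(pts[0]) + v(pts[1]))

def global_edge_functional(w_K, w_K2, a, b):
    """N_{l,E}(w): average of the edge means taken from the two neighbouring
    cells K and K'; for a boundary edge pass w_K2=None."""
    nK = edge_mean(w_K, a, b)
    return nK if w_K2 is None else 0.5 * (nK + edge_mean(w_K2, a, b))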
4.2 Modified Crouzeix-Raviart Element
It is sometimes necessary to use other types of boundary conditions than the Dirichlet boundary condition considered in (3). For example, if a part Γ_N of ∂Ω represents a free surface of a fluid, then one can use the boundary conditions (50), where I is the identity tensor and n is the outer normal vector to Γ_N. The first condition in (50) states that zero surface forces act in the tangential direction to Γ_N. The boundary conditions (50) generally do not allow to use the bilinear form a defined in (4) and instead we have to consider a bilinear form based on the deformation tensor. If the remaining part of the boundary has positive measure, then the ellipticity of this bilinear form for functions from H^1(Ω)^d vanishing on ∂Ω \ Γ_N is assured by the Korn inequality. However, the discrete Korn inequality does not hold for most first order nonconforming finite element spaces (cf. [21]); particularly, it fails for the elements investigated in the preceding section. Consequently, in these cases, the validity of (H2) cannot be shown.
One of the few first order nonconforming elements which do not violate the discrete Korn inequality is the modified Crouzeix-Raviart element P_1^mod which was developed in [23] for solving convection dominated problems. Here we confine ourselves to a particular example of this element for which the space of shape functions on the reference triangle K̂ is spanned by functions of the barycentric coordinates on K̂. To each edge Ê of K̂, we assign two nodal functionals, N̂_{Ê,1} and N̂_{Ê,2}, defined by the mean value over Ê and by a weighted mean over Ê, respectively, where the weight λ̂_Ê is a barycentric coordinate on Ê.
Figure 2: Coarsest grids (level 0).
The space V_l is now obtained by transforming
E. The space V l is now obtained by transforming
the space b
P and the six nodal functionals b
K, onto the cells of the
triangulation by means of regular ane mappings (cf. the preceding section). In this way,
we get the space
Z
denotes the jump of v across the edge E,
and 1 , 2 , 3 are the barycentric coordinates on the cell K. A nice property of the P modelement is that Z
It is easy to verify that the assumptions made in Section 3 are satisfied. The assumption (H2) holds if the space Q_l consists of discontinuous piecewise linear functions from L^2_0(Ω) and each cell has at least one vertex in the interior of Ω, since then an inf-sup condition holds (cf. [22] and [15]). Consequently, (H2) is also satisfied if Q_l is a subspace of L^2_0(Ω) consisting e.g. of piecewise constant functions, continuous piecewise linear functions or nonconforming piecewise linear functions.
Here we present numerical results for Q_l consisting of piecewise constant functions so that the transfer operator for the pressure can be replaced by the identity. The discretisations are defined on a sequence of uniformly refined triangular grids starting with the triangular grid from
Fig. 2, which we denote as T_0 (level 0). We shall consider two kinds of multi-level solvers for solving the Stokes equations on a given geometrical mesh level. The first one is a classical multi-level method where each mesh corresponds to one level of the algorithm and on each mesh we consider a discrete problem of the same type, i.e., the Stokes equations discretized using the P_1^mod element. In the second multi-level solver, the discretisation using the P_1^mod element is used on the finest mesh T_L only and on all other mesh levels a 'cheaper' discretisation is employed, namely the Crouzeix-Raviart element with piecewise constant pressure, denoted as P_1^nc. In addition, the P_1^nc discretisation is also applied on the finest mesh level T_L so that two different discretisations corresponding to two levels of the multi-level method are considered on the finest geometrical mesh. We refer to the next section for more details on this technique. Note that both kinds of multi-level solvers fit into the framework of this paper and the prolongations and restrictions can be defined using the general transfer operator described above. The numbers of degrees of freedom to which the mentioned discretisations lead on different meshes are given in Table 3.
As a smoother, we use the basic iteration described in Section 2.5. Therefore, the system (13) has to be solved in each smoothing step, which implies that the efficient solution of (13) is essential for the efficiency of the multi-level solver. In view of an application of our procedure to the Navier-Stokes equations, where A and D are no longer symmetric, we solve (13) iteratively by a preconditioned flexible GMRES method, see e.g. [25]. The preconditioner is defined via the pressure Schur complement equation (51) of (13), whose right-hand side is formed from the right-hand side of (13). First, (51) is solved approximately by some steps of the standard GMRES method by Saad and Schultz [26], providing the approximation p̃ of p^{j+1} − p^j. Second, an approximation ũ of u^{j+1} − u^j is computed by applying D^{-1} to the velocity residual reduced by B^T p̃.
The solution of (13) is the most time consuming part of the multi-grid iteration. It was shown by Zulehner [34] that the smoothing property of the Braess-Sarazin smoother is maintained if (13) is solved only approximately as long as the approximation is close enough to the solution. Numerical experiments show that, in general, one obtains similar rates of convergence of the multi-level algorithm as with an exact solution of (13). Considering the efficiency of the multi-level solver measured in computing time, the variant with the approximate solution is in general considerably better and therefore it is used in practice. Thus, in the computations presented below we stopped the solution of (13) after the Euclidean norm of the residual has been reduced by the factor 10. The systems on level 0 were solved exactly.
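The following sketch (Python with NumPy) mimics this inexact solution strategy in its simplest form: a preconditioned Richardson iteration stands in for the flexible GMRES method of [25], 'apply_prec' stands for the two-stage Schur complement preconditioner described above, and the iteration stops once the Euclidean residual norm has been reduced by the prescribed factor; all names are our assumptions.

import numpy as np

def solve_smoothing_system(apply_K, apply_prec, rhs, reduction=0.1, maxiter=20):
    """Inexact solve of the smoothing system K x = rhs by a preconditioned
    Richardson iteration (stand-in for preconditioned flexible GMRES).
    Stops once the Euclidean residual norm is reduced by 'reduction'
    or after 'maxiter' steps, mirroring the stopping rule used above."""
    x = np.zeros_like(rhs)
    r = rhs - apply_K(x)
    r0 = np.linalg.norm(r)
    for _ in range(maxiter):
        if np.linalg.norm(r) <= reduction * r0:
            break
        x = x + apply_prec(r)      # preconditioner: Schur complement step
        r = rhs - apply_K(x)       # recompute the residual
    return x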
We shall present numerical results for the following example.
Example 4.1 (W-cycle in a 2d test case). We consider the Stokes problem in Ω with a prescribed smooth solution. This example is taken from the paper [5] by Braess and Sarazin. The computations were performed using the W(m,m)-cycle with the parameter value 1.0.
Table 1 shows the geometric means of the error reduction rates, measured in the L^2-norm, for the multi-level solver which uses the P_1^mod discretisation on all levels. Table 2 shows the averaged error reduction rates for the multi-level solver which combines the use of the P_1^mod discretisation on the finest level with the use of P_1^nc discretisations on all lower levels. We observe that, for each m, the error reduction rates can be bounded by a level-independent constant as predicted by Theorem 2.1. In addition, it can be clearly seen from the two tables that, for m > 1, the multi-level method which uses the Crouzeix-Raviart element on lower levels converges faster than the standard multi-level method.
Table 1: Example 4.1, P_1^mod discretisation on all levels, averaged error reduction rates.
mesh level
5 2.20e-1 2.87e-1 3.13e-1 3.13e-1 3.08e-1
6 1.70e-1 2.36e-1 2.62e-1 2.62e-1 2.59e-1
Table 2: Example 4.1, P_1^mod on the finest level combined with P_1^nc on lower levels, averaged error reduction rates.
mesh level
6 6.17e-2 1.27e-1 1.06e-1 1.01e-1 1.02e-1
4.3 A Multi-grid Method for Higher Order Discretisations Based
on Lowest Order Nonconforming Discretisations
As the last and certainly most important application, we consider a multi-grid method for higher order discretisations which is based on lowest order nonconforming discretisations on the coarser multi-grid levels. Like in the previous section, the hierarchy of this multi-grid method has two different discretisations on the finest geometric mesh level L, i.e. the spaces for the multi-grid levels L and L+1 are defined both on T_L, see Figure 3. The higher order discretisation is used on multi-grid level L+1; on all coarser levels l, 0 <= l <= L, a nonconforming discretisation of lowest order is applied.
Figure 3: The multi-level approach for higher order discretisations (higher order on the finest multi-grid level, low order nonconforming on all other levels).
Remark 4.1 The construction of this multi-grid method was inspired by a numerical study of a benchmark problem for the steady state Navier-Stokes equations in [18]. This study shows on the one hand a dramatic improvement of benchmark reference values using higher order discretisations in comparison to lowest order nonconforming discretisations. On the other hand, the arising systems of equations for the lowest order nonconforming discretisations could be solved very fast and efficiently with the standard multi-grid approach. This standard multi-grid approach showed a very unsatisfactory behaviour for all higher order discretisations. Often, it did not converge at all. Sometimes, convergence could be achieved by heavy damping, leading to a bad rate of convergence and a very inefficient solver. These difficulties could be overcome by applying the multi-grid method described in this section. Thus, this method has already proved to be a powerful tool in the numerical solution of the Navier-Stokes equations, combining the superior accuracy of higher order discretisations and the efficiency of multi-grid solvers for lowest order nonconforming discretisations.
In case of the Stokes problem, the proposed multi-grid method fits into the framework of this paper. Between the lower levels l and l+1 of the multi-grid hierarchy, 0 <= l < L, we
Table 3: Degrees of freedom for the discretisations in the 2d test cases, Examples 4.1 and 4.2.
disc. mesh level
use the transfer operators which have been defined in Section 4.1. To define the transfer operator between the levels L and L+1, new intermediate spaces have to be constructed since in general V_L is not contained in V_{L+1} and for some higher order discretisations also Q_L is not contained in Q_{L+1}. The construction can be done by applying the techniques from Section 3. We choose the local spaces S_{L+1}(K) := P_{L+1}(K), where P_{L+1}(K) is the space of local shape functions of the corresponding finite element spaces V_{L+1} and Q_{L+1}, respectively.
We will present numerical results obtained with this multi-grid technique for a number of higher order finite element discretisations. Let P_0 and Q_0 denote the spaces of piecewise constant functions on simplicial mesh cells and quadrilateral/hexahedral mesh cells, respectively. By P_k and Q_k we denote the well-known finite element spaces of continuous functions of piecewise k-th degree. The notation P_k^disc is used for spaces of discontinuous functions whose restriction to each mesh cell is a polynomial of degree k. The Crouzeix-Raviart finite element space is again denoted by P_1^nc.
Three examples for the multi-level approach for higher order discretisations will be considered. The first example confirms the theoretical results of Theorem 2.1, i.e., we consider a two-level method and solve the systems (13) arising in the smoothing process exactly. The solution procedure was described in the preceding section. Then we consider Example 4.1 from the previous section and Example 4.4 defined below to demonstrate the behaviour of the multi-level W-cycle with approximated solutions of (13) for test problems in 2d and 3d.
Example 4.2 (Two-level method). This example has been designed to check the theoretically predicted results with respect to the two-level method. We consider the Stokes equations (3) in Ω with right-hand side f = 0, such that the zero function is the solution of (3). The computations were carried out on a sequence of meshes starting with level 0 (see Fig. 2), for which the corresponding numbers of degrees of freedom are given in Table 3. The discrete solution on each level is zero as well. Since we consider a two-level method, there is only one mesh level in a specific computation. On this geometric mesh level, the lowest order nonconforming discretisation of (3) defines the coarse level in the two-level method and a higher order discretisation the fine level.
As initial guess of the two-level method for each computation, we have chosen constant values for all interior degrees of freedom of u and for p; thus, the initial error is smooth. The results presented in Table 4 were obtained with 3 pre-smoothing steps, without post-smoothing, and with an exact solution of (13). Thus, we have exactly the situation investigated in Section 2.
The results of the numerical studies are given in Table 4 and Figure 4. Table 4 shows the averaged error reduction rate for the two-level method applied to a number of higher order discretisations. It can be clearly seen that the error reduction rate is, for all discretisations, independent of the level, as stated in Theorem 2.1. The second statement of Theorem 2.1, the decrease of the error reduction rate by O(1/m), where m is the number of pre-smoothing steps, is illustrated for the P_3/P_2 discretisation on mesh level 4 in Figure 4. The situation is quite similar for the other higher order discretisations.
Table 4: Example 4.2, 3 pre-smoothing steps, no post-smoothing, averaged error reduction rates.
disc. mesh level
Example 4.3 (Example 4.1 continued). Now let us turn back to Example 4.1 formulated in the preceding section. Table 5 shows the averaged error reduction rates obtained for higher order discretisations with the W(1,1)-cycle, solving (13) by the flexible GMRES method until the Euclidean norm of the residual has been reduced by the prescribed factor, which was usually achieved within the first few flexible GMRES steps. We applied 10 iteration steps each time in the GMRES method for solving the pressure Schur complement equation (51).
Although system (13) is solved only approximately in each smoothing step, the results in Table 5 show averaged error reduction rates which are independent of the level.
Figure 4: Example 4.2, averaged error reduction rate for different numbers of pre-smoothing steps, P_3/P_2 discretisation, mesh level 4 (axes: number of pre-smoothing steps vs. averaged error reduction rate).
Table 5: Example 4.3, averaged error reduction rates.
disc. mesh level
3 2.29e-1 2.27e-1 2.25e-1 2.21e-1 2.18e-1
Example 4.4 (W-cycle in a 3d test case). This example demonstrates the behaviour of the multi-level solver applied to a three-dimensional problem. We consider the unit cube with a prescribed solution (u, p). The constant c in the pressure is chosen such that p has zero mean value, and the right hand side f is chosen such that (u, p) fulfil (3).
For discretisations based on hexahedra, the initial grid (level 0) was obtained by dividing the unit cube into eight cubes of edge length 1/2 as indicated in Figure 5.
Figure 5: Mesh level 0 (left) and 1 (right).
The initial grid for discretisations based on simplicial mesh cells consists of 48 tetrahedra. The corresponding numbers of degrees of freedom are given in Table 6. It is noteworthy that sometimes a lower order finite element space possesses more degrees of freedom than a higher order space on the same mesh level, e.g. compare P_1^nc with the higher order spaces in Table 6.
Table 6: Degrees of freedom for the discretisations in the 3d test case, Example 4.4.
disc. mesh level
Table 7 presents results obtained with the W(3,3)-cycle. The saddle point problems (13) in the smoothing process were solved up to a reduction of the Euclidean norm of the residual by a factor 10^4 or with at most 20 flexible GMRES iterations. Computations with weaker stopping criteria did not lead to such clearly level-independent rates of convergence as presented in Table 7. This behaviour corresponds to the theory by Zulehner [34], which is valid if the approximation of the solution of (13) is close enough to the solution.
Acknowledgement
The research of Petr Knobloch has been supported under the grants No. 201/99/P029 and 201/99/0267 of the Czech Grant Agency and by the grant MSM 113200007. The research of Gunar Matthies has been partially supported under the grant FOR 301 of the DFG.
Table 7: Example 4.4, averaged error reduction rates.
disc. mesh level
--R
A class of iterative methods for solving saddle point problems.
Analysis of some
A multigrid algorithm for the mortar
A multigrid method for nonconforming FE-discretisations with applications to non-matching grids
Multigrid Methods
An optimal-order multigrid method for P1 nonconforming finite elements
A nonconforming multigrid method for the stationary Stokes equations.
On the convergence of Galerkin-multigrid methods for nonconforming finite elements
On the convergence of nonnested multigrid methods with nested spaces on coarse grids.
Multigrid algorithms for the nonconforming and mixed methods for nonsymmetric and indefinite problems
Multigrid and multilevel methods for nonconforming Q 1 elements.
Conforming and nonconforming
Finite element methods for Navier-Stokes equations
Higher order
A coupled multigrid method for nonconforming
Numerical performance of smoothers in coupled multigrid methods for the parallel solution of the incompressible Navier-Stokes equations
On Korn's inequality for nonconforming
On the inf-sup condition for the P1mod element
The new nonconforming
Simple nonconforming quadrilateral Stokes element.
A flexible inner-outer preconditioned GMRES algorithm
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems.
Eine Klasse von e
Parallele L
A general transfer operator for arbitrary
Multigrid Methods for Finite Elements.
Multigrid techniques for a divergence-free finite element discretization
The derivation of minimal support basis functions for the discrete divergence operator.
A class of smoothers for saddle point problems.
--TR
GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems
Multigrid methods for nonconforming finite element methods
A flexible inner-outer preconditioned GMRES algorithm
The derivation of minimal support basis functions for the discrete divergence operator
An efficient smoother for the Stokes problem
Multigrid and multilevel methods for nonconforming Q1 elements
Multigrid Algorithms for Nonconforming and Mixed Methods for Nonsymmetric and Indefinite Problems
Convergence of nonconforming multigrid methods without full elliptic regularity
A multigrid method for nonconforming FE-discretisations with application to non-matching grids
A Multigrid Algorithm for the Mortar Finite Element Method
A coupled multigrid method for nonconforming finite element discretizations of the 2D-Stokes equation
A class of smoothers for saddle point problems
Finite Element Method for Elliptic Problems
--CTR
M. Jung , T. D. Todorov, Isoparametric multigrid method for reaction-diffusion equations on two-dimensional domains, Applied Numerical Mathematics, v.56 n.12, p.1570-1583, 12 December 2006 | finite element discretisation;multi-level method;stokes problem;mixed problems |
635802 | Approximating most specific concepts in description logics with existential restrictions. | Computing the most specific concept (msc) is an inference task that allows to abstract from individuals defined in description logic (DL) knowledge bases. For DLs that allow for existential restrictions or number restrictions, however, the msc need not exist unless one allows for cyclic concepts interpreted with the greatest fixed-point semantics. Since such concepts cannot be handled by current DL-systems. we propose to approximate the msc. We show that for the DL ALE, which has concept conjunction, a restricted form of negation, existential restrictions, and value restrictions as constructors, approximations of the msc always exist and can effectively be computed. | Introduction
The most specific concept (msc) of an individual b is a concept description that has b as instance and is the least concept description (w.r.t. subsumption) with this property. Roughly speaking, the msc is the concept description that, among all concept descriptions of a given DL, represents b best. Closely related to the msc is the least common subsumer (lcs), which, given concept descriptions C_1, ..., C_n, is the least concept description (w.r.t. subsumption) subsuming C_1, ..., C_n. Thus, where the msc generalizes an individual, the lcs generalizes a set of concept descriptions.
In [2-4], the msc (first introduced in [15]) and the lcs (first introduced in [5]) have been proposed to support the bottom-up construction of a knowledge base. The motivation comes from an application in chemical process engineering [17], where the process engineers construct the knowledge base (which consists of descriptions of standard building blocks of process models) as follows: First, they introduce several "typical" examples of a standard building block as individuals, and then they generalize (the descriptions of) these individuals into a concept description that (i) has all the individuals as instances, and (ii) is the most specific description satisfying property (i). (This work was carried out while the author was still at the LuFG Theoretische Informatik, RWTH Aachen, Germany.) The task of computing a concept description satisfying (i) and (ii) can be split into two subtasks: computing the msc of a single individual, and computing the lcs of a given finite number of concepts.
The lcs has been thoroughly investigated for (sublanguages of) Classic [5, 2, 12, 11], for DLs allowing for existential restrictions like ALE [3], and most recently, for ALEN, a DL allowing for both existential and number restrictions [13]. For all these DLs, except for Classic in case attributes are interpreted as total functions [12], it has turned out that the lcs always exists and that it can effectively be computed. Prototypical implementations show that the lcs algorithms behave quite well in practice [7, 4].
For the msc, the situation is not that rosy. For DLs allowing for number restrictions or existential restrictions, the msc does not exist in general. Hence, the first step in the bottom-up construction, namely computing the msc, cannot be performed. In [2], it has been shown that for ALN, a sublanguage of Classic, the existence of the msc can be guaranteed if one allows for cyclic concept descriptions, i.e., concepts with cyclic definitions, interpreted by the greatest fixed-point semantics. Most likely, such concept descriptions would also guarantee the existence of the msc in DLs with existential restrictions. However, current DL-systems, like FaCT [10] and RACE [9], do not support this kind of cyclic concept descriptions: although they allow for cyclic definitions of concepts, these systems do not employ the greatest fixed-point semantics, but descriptive semantics. Consequently, cyclic concept descriptions returned by algorithms computing the msc cannot be processed by these systems.
In this paper, we therefore propose to approximate the msc. Roughly speaking, for some given non-negative integer k, the k-approximation of the msc of an individual b is the least concept description (w.r.t. subsumption) among all concept descriptions with b as instance and role depth at most k. That is, the set of potential most specific concepts is restricted to the set of concept descriptions with role depth bounded by k. For (sublanguages of) ALE we show that k-approximations always exist and that they can effectively be computed. Thus, when replacing "msc" by "k-approximation", the first step of the bottom-up construction can always be carried out. Although the original outcome of this step is only approximated, this might in fact suffice as a first suggestion to the knowledge engineer.
While for full ALE our k-approximation algorithm is of questionable practical use (since it employs a simple enumeration argument), we propose improved algorithms for the sublanguages EL and EL: of ALE. (EL allows for conjunction and existential restrictions, and EL: additionally allows for a restricted form of negation.) Our approach for computing k-approximations in these sublanguages is based on representing concept descriptions by certain trees and ABoxes by certain (systems of) graphs, and then characterizing instance relationships by homomorphisms from trees into graphs. The k-approximation operation then consists in unraveling the graphs into trees and translating them back into con-
Table 1. Syntax and semantics of concept descriptions.
Construct name            Syntax   Semantics                                        EL  EL:  ALE
top-concept               ⊤        Δ^I                                              x   x    x
conjunction               C ⊓ D    C^I ∩ D^I                                        x   x    x
existential restriction   ∃r.C     {x ∈ Δ^I | ∃y: (x,y) ∈ r^I ∧ y ∈ C^I}            x   x    x
primitive negation        ¬P       Δ^I \ P^I                                            x    x
value restriction         ∀r.C     {x ∈ Δ^I | ∀y: (x,y) ∈ r^I → y ∈ C^I}                     x
cept descriptions. In case the unraveling yields finite trees, the corresponding concept descriptions are "exact" most specific concepts, showing that in this case the msc exists. Otherwise, pruning the infinite trees on level k yields k-approximations of the most specific concepts.
The outline of the paper is as follows. In the next section, we introduce the basic notions and formally define k-approximations. To get started, in Section 3 we present the characterization of instance relationships in EL and show how this can be employed to compute k-approximations or (if it exists) the msc. In the subsequent section we extend the results to EL:, and finally deal with ALE in Section 5. The paper concludes with some remarks on future work. Due to space limitations, we refer to [14] for all technical details and complete proofs.
2 Preliminaries
Concept descriptions are inductively defined with the help of a set of constructors, starting with disjoint sets N_C of concept names and N_R of role names. In this work, we consider concept descriptions built from the constructors shown in Table 1, where r ∈ N_R denotes a role name, P ∈ N_C a concept name, and C, D concept descriptions. The concept descriptions in the DLs EL, EL:, and ALE are built using certain subsets of these constructors, as shown in the last three columns of Table 1.
An ABox A is a finite set of assertions of the form (a, b) : r (role assertions) or a : C (concept assertions), where a, b are individuals from a set N_I (disjoint from N_C and N_R), r is a role name, and C is a concept description. An ABox is called an L-ABox if all concept descriptions occurring in A are L-concept descriptions.
The semantics of a concept description is defined in terms of an interpretation I = (Δ^I, ·^I). The domain Δ^I of I is a non-empty set of objects, and the interpretation function ·^I maps each concept name P ∈ N_C to a set P^I ⊆ Δ^I, each role name r ∈ N_R to a binary relation r^I ⊆ Δ^I × Δ^I, and each individual a ∈ N_I to an element a^I ∈ Δ^I such that a ≠ b implies a^I ≠ b^I (unique name assumption). The extension of ·^I to arbitrary concept descriptions is inductively defined, as shown in the third column of Table 1. An interpretation I is a model of an ABox A iff it satisfies (a^I, b^I) ∈ r^I for all role assertions (a, b) : r ∈ A, and a^I ∈ C^I for all concept assertions a : C ∈ A.
The most important traditional inference services provided by DL-systems are computing the subsumption hierarchy and instance relationships. The concept description C is subsumed by the concept description D (C ⊑ D) iff C^I ⊆ D^I for all interpretations I; C and D are equivalent (C ≡ D) iff they subsume each other. An individual a ∈ N_I is an instance of C w.r.t. A (a ∈_A C) iff a^I ∈ C^I for all models I of A.
In this paper, we are interested in the computation of most specific concepts and their approximation via concept descriptions of limited depth. The depth depth(C) of a concept description C is defined as the maximal number of nested quantifiers in C. We also need to introduce least common subsumers formally.
Definition 1 (msc, k-approximation, lcs). Let A be an L-ABox, a an individual in A, and C, C_1, ..., C_n L-concept descriptions.
- C is the most specific concept (msc) of a w.r.t. A (msc_A(a)) iff a ∈_A C, and for all L-concept descriptions C', a ∈_A C' implies C ⊑ C'.
- C is the k-approximation of a w.r.t. A (msc_{k,A}(a)) iff a ∈_A C, depth(C) ≤ k, and for all L-concept descriptions C', a ∈_A C' and depth(C') ≤ k imply C ⊑ C'.
- C is the least common subsumer (lcs) of the L-concept descriptions C_1, ..., C_n (lcs(C_1, ..., C_n)) iff C_i ⊑ C for all 1 ≤ i ≤ n, and for all L-concept descriptions C', C_i ⊑ C'
for all
Note that by definition, most specific concepts, k-approximations, and least common subsumers are uniquely determined up to equivalence (if they exist).
The following example shows that in DLs allowing for existential restrictions the msc of an ABox-individual b need not exist.
Example 1. Let L be one of the DLs EL, EL:, or ALE. Consider the L-ABox A := {(b, b) : r}. It is easy to see that, for each n ≥ 0, b is an instance of the L-concept description
C_n := ∃r. ... ∃r.⊤   (n nested existential restrictions).
The msc of b can be written as the infinite conjunction ⊓_{n≥0} C_n, which, however, cannot be expressed by a (finite) L-concept description. As we will see, the k-approximation of b is the L-concept description ⊓_{0≤n≤k} C_n.
3 Most specific concepts in EL
In the following subsection, we introduce the characterization of instance relationships in EL, which yields the basis for the algorithm computing k-approximations (Section 3.2). All results presented in this section are rather straightforward. However, they prepare the ground for the more involved technical problems one encounters for EL:.
3.1 Characterizing instance in EL
In order to characterize instance relationships, we need to introduce description graphs (representing ABoxes) and description trees (representing concept descriptions). For EL, an EL-description graph is a labeled graph of the form G = (V, E, ') whose edges vrw ∈ E are labeled with role names r ∈ N_R and whose nodes v ∈ V are labeled with sets '(v) of concept names from N_C. The empty label corresponds to the top-concept. An EL-description tree is of the form G = (V, E, v_0, '), i.e., it is an EL-description graph which is a tree with root v_0 ∈ V.
versa [3]: Every EL-concept description C can be written (modulo equivalence)
as f>g. Such a
concept description is recursively translated into an EL-description tree
follows: The set of all concept names P i occurring on the top-level
of C yields the label '(v 0 ) of the root v 0 , and each existential restriction
yields an r i -successor that is the root of the tree corresponding to C i . For
example, the concept description
yields the description tree depicted on the left hand side of Figure 1.
Every EL-description tree translated into an EL-concept
description CG as follows: the concept names occurring in the label of v 0 yield
the concept names in the top-level conjunction of C G , and each v 0
an existential restriction 9r:C, where C is the EL-concept description obtained
by translating the subtree of G with root v. For a leaf v 2 V , the empty label is
translated into the top-concept.
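The correspondence between EL-concept descriptions and EL-description trees is easy to mirror in code. A small sketch (Python; the concrete textual syntax used for printing concepts, with EXISTS, AND and TOP, is our own choice) of description trees as a recursive data structure together with the translation back into a concept description:

from dataclasses import dataclass, field

@dataclass
class ELNode:
    """Node of an EL-description tree: a set of concept names as label
    and a list of (role, subtree) edges to its successors."""
    label: frozenset = frozenset()
    children: list = field(default_factory=list)  # [(role, ELNode), ...]

def to_concept(node):
    """Translate a description tree back into an EL-concept description."""
    parts = sorted(node.label)
    parts += ["EXISTS %s.(%s)" % (r, to_concept(c)) for r, c in node.children]
    return " AND ".join(parts) if parts else "TOP"

# Example: prints "P AND EXISTS r.(Q) AND EXISTS s.(TOP)"
tree = ELNode(frozenset({"P"}), [("r", ELNode(frozenset({"Q"}))), ("s", ELNode())])
print(to_concept(tree))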
Adapting the translation of EL-concept descriptions into EL-description trees, an EL-ABox A is translated into an EL-description graph G(A) as follows: Let Ind(A) denote the set of all individuals occurring in A. For each a ∈ Ind(A), let C_a := ⊓_{a:D ∈ A} D if there exists a concept assertion a : D ∈ A, and otherwise C_a := ⊤. Let G(C_a) = (V_a, E_a, a, '_a) denote the EL-description tree obtained from C_a. (Note that the individual a is defined to be the root of G(C_a); in particular, a ∈ V_a.) W.l.o.g. let the sets V_a, a ∈ Ind(A), be pairwise disjoint. Then, G(A) = (V, E, ') is defined by
- V := ∪_{a∈Ind(A)} V_a,
- E := ∪_{a∈Ind(A)} E_a ∪ {arb | (a, b) : r ∈ A}, and
- '(v) := '_a(v) for v ∈ V_a.
As an example, consider an EL-ABox A whose corresponding EL-description graph G(A) is depicted on the right hand side of Figure 1.
Fig. 1. The EL-description tree of C and the EL-description graph of A.
Now, an instance relationship, a ∈_A C, in EL can be characterized in terms of a homomorphism from the description tree of C into the description graph of A: a homomorphism φ from an EL-description tree H = (V_H, E_H, w_0, '_H) into an EL-description graph G = (V, E, ') is a mapping φ: V_H → V such that '_H(v) ⊆ '(φ(v)) for all v ∈ V_H and φ(v) r φ(w) ∈ E for all edges v r w ∈ E_H.
Theorem 1. [14] Let A be an EL-ABox, a ∈ Ind(A) an individual in A, and C an EL-concept description. Let G(A) denote the EL-description graph of A and G(C) the EL-description tree of C. Then, a ∈_A C iff there exists a homomorphism φ from G(C) into G(A) such that φ(v_0) = a, where v_0 is the root of G(C).
In our example, a is an instance of C, since mapping v_0 on a, the v_i on the corresponding w_i, v_3 on b, and v_4 on c yields a homomorphism from G(C) into G(A).
Theorem 1 is a special case of the characterization of subsumption between simple conceptual graphs [6], and of the characterization of containment of conjunctive queries [1]. In these more general settings, testing for the existence of homomorphisms is an NP-complete problem. In the restricted case of testing homomorphisms mapping trees into graphs, the problem is polynomial [8]. Thus, as a corollary of Theorem 1, we obtain the following complexity result.
Corollary 1. The instance problem for EL can be decided in polynomial time.
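Theorem 1 can be exploited computationally by the following recursive test (Python; the graph and tree representations are our assumptions): for a tree node v and a candidate graph node w we check that the label of v is contained in the label of w and that every child of v can be mapped onto some suitable successor of w. Because the left-hand side is a tree, the test runs in polynomial time, in line with Corollary 1 (memoising over (v, w) pairs makes the bound obvious).

def can_map(tree, graph_labels, graph_edges, v, w):
    """True iff the subtree of the description tree rooted at v can be
    mapped homomorphically onto the graph node w.

    tree         : dict v -> (label_set, [(role, child), ...])
    graph_labels : dict w -> label_set
    graph_edges  : dict (w, role) -> set of role-successors of w
    """
    label, children = tree[v]
    if not label <= graph_labels[w]:
        return False
    return all(
        any(can_map(tree, graph_labels, graph_edges, c, w2)
            for w2 in graph_edges.get((w, r), ()))
        for r, c in children
    )

def is_instance(tree, root, graph_labels, graph_edges, a):
    """a is an instance of C (Theorem 1): map the root of G(C) onto a."""
    return can_map(tree, graph_labels, graph_edges, root, a)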
Theorem 1 generalizes the following characterization of subsumption in EL introduced in [3]. This characterization uses homomorphisms between description trees, which are defined just as homomorphisms from description trees into description graphs, but where we additionally require to map roots onto roots.
Theorem 2. [3] Let C, D be EL-concept descriptions, and let G(C) and G(D) be the corresponding description trees. Then, C ⊑ D iff there exists a homomorphism from G(D) into G(C).
3.2 Computing k-approximations in EL
In the sequel, we assume A to be an EL-ABox, a an individual occurring in A, and k a non-negative integer. Roughly, our algorithm computing msc_{k,A}(a) works as follows: First, the description graph G(A) is unraveled into a tree with root a. This tree, denoted T(a, G(A)), has a finite branching factor, but possibly infinitely long paths. Pruning all paths to length k yields an EL-description tree T_k(a, G(A)) of depth k. Using Theorem 1 and Theorem 2, one can show that the EL-concept description C_{T_k(a,G(A))} is equivalent to msc_{k,A}(a). As an immediate consequence, in case T(a, G(A)) is finite, C_{T(a,G(A))} yields the msc of a. In what follows, we define T(a, G(A)) and T_k(a, G(A)), and prove correctness of our algorithm.
is a path from v 0 to vn of length and with label
n. The empty path
in which case the label of p is the empty word ". The node v n is an r 1 r n -
successor of v 0 . Every node is an "-successor of itself. A node v is reachable from
there exists a path from v 0 to v. The graph G is cyclic, if there exists a
non-empty path from a node in G to itself.
Denition 2. Let and a 2 V . The tree T (a; G) of a w.r.t. A is
dened by T (a;
{ V T := fp j p is a path from a to some node in Gg,
{
{ ' T (p) := '(v) if p is a path to v.
For IN, the tree T k (a; G) of a w.r.t. G and k is dened by T k (a; G) :=
{ V T
{
{ ' T
k .
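Definition 2 translates directly into a bounded unraveling. A sketch (Python; adjacency lists of (role, successor) pairs and the same textual concept syntax as above are our assumptions) that traverses all paths of length at most k starting in a and reads off the concept description C_{T_k(a,G)} on the fly:

def k_approximation(graph_labels, graph_edges, a, k):
    """Concept description of T_k(a, G): unravel all paths of length at
    most k that start in a.  graph_edges maps a node to a list of
    (role, successor) pairs; graph_labels maps a node to its label set."""
    def unravel(v, depth):
        parts = sorted(graph_labels[v])
        if depth < k:   # only extend paths shorter than k
            parts += ["EXISTS %s.(%s)" % (r, unravel(w, depth + 1))
                      for r, w in graph_edges.get(v, [])]
        return " AND ".join(parts) if parts else "TOP"
    return unravel(a, 0)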
Now, we can show the main theorem of this section.
Theorem 3. Let A be an EL-ABox, a ∈ Ind(A), and k ∈ IN. Then, C_{T_k(a,G(A))} is the k-approximation of a w.r.t. A. If, starting from a, no cyclic path in A can be reached (i.e., T(a, G(A)) is finite), then C_{T(a,G(A))} is the msc of a w.r.t. A; otherwise no msc exists.
Proof sketch. Obviously, there exists a homomorphism from T_k(a, G(A)), a tree isomorphic to G(C_{T_k(a,G(A))}), into G(A) with the root mapped on a. By Theorem 1, this implies a ∈_A C_{T_k(a,G(A))}.
Let C be an EL-concept description with a ∈_A C and depth(C) ≤ k. Theorem 1 implies that there exists a homomorphism φ from G(C) into G(A). Given φ, it is easy to construct a homomorphism from G(C) into T_k(a, G(A)). Thus, with Theorem 2, we conclude C_{T_k(a,G(A))} ⊑ C. Altogether, this shows that C_{T_k(a,G(A))} is a k-approximation as claimed.
Now, assume that, starting from a, a cycle can be reached in A, that is, T(a, G(A)) is infinite. Then, we have a strictly decreasing chain of k-approximations C_{k+1} ⊏ C_k for increasing depth k, k ≥ 0. From Theorem 2, we conclude that there does not exist an EL-concept description subsumed by all of these k-approximations (since such a concept description only has a fixed and finite depth). Thus, a cannot have an msc.
Conversely, if T(a, G(A)) is finite, say with depth k, from the observation that all k'-approximations, for k' ≥ k, are equivalent, it immediately follows that C_{T(a,G(A))} is the msc of a. □
Obviously, there exists a deterministic algorithm computing the k-approximation (i.e., C_{T_k(a,G(A))}) based on this construction. The size |A| of A is defined as the number of role assertions in A plus the sum of the sizes |C| over all concept assertions a : C ∈ A, where the size |C| of C is defined as the sum of the number of occurrences of concept names, role names, and constructors in C. Similarly, one obtains an exponential complexity upper bound for computing the msc (if it exists).
Corollary 2. For an EL-ABox A, an individual a ∈ Ind(A), and k ∈ IN, the k-approximation of a w.r.t. A always exists and can be computed in time O(|A|^k). The msc of a exists iff, starting from a, no cycle can be reached in A. The existence of the msc can be decided in polynomial time, and if the msc exists, it can be computed in time exponential in the size of A.
In the remainder of this section, we prove that the exponential upper bounds are tight. To this end, we show examples demonstrating that k-approximations and the msc may grow exponentially.
Example 2. Let A := {(a, a) : r, (a, a) : s}. The EL-description graph G(A) as well as the EL-description trees T_1(a, G(A)) and T_2(a, G(A)) are depicted in Figure 2. It is easy to see that, for k ≥ 1, T_k(a, G(A)) yields a full binary tree of depth k where
- each node is labeled with the empty set, and
- each node except the leaves has one r- and one s-successor.
By Theorem 3, C_{T_k(a,G(A))} is the k-approximation of a w.r.t. A. The size of C_{T_k(a,G(A))} is exponential in k. Moreover, it is not hard to see that there does not exist an EL-concept description C which is equivalent to but smaller than C_{T_k(a,G(A))}.
Example 3. For n
Obviously, An is acyclic, and the size of An is linear in n. By Theorem 3,
is the msc of a 1 w.r.t. An . It is easy to see that, for each n,
coincides with the tree obtained in Example 2. As before we obtain
that
{ C T (a1 ;G(A)) is of size exponential in jAn j; and
{ there does not exist an EL-concept description C equivalent to but smaller
than C T (a1 ;G(A)) .
Fig. 2. The EL-description graph and the EL-description trees from Example 2.
Summarizing, we obtain the following lower bounds.
Proposition 1. Let A be an EL-ABox, a ∈ Ind(A), and k ∈ IN.
- The size of msc_{k,A}(a) may grow with |A|^k.
- If it exists, the size of msc_A(a) may grow exponentially in |A|.
4 Most specific concepts in EL:
Our goal is to obtain a characterization of the (k-approximation of the) msc in EL: analogously to the one given in Theorem 3 for EL. To achieve this goal, first the notions of description graph and of description tree are extended from EL to EL: by allowing for subsets of N_C ∪ {¬P | P ∈ N_C} ∪ {⊥} as node labels. Just as for EL, there exists a 1-1 correspondence between EL:-concept descriptions and EL:-description trees, and an EL:-ABox A is translated into an EL:-description graph G(A) as described for EL-ABoxes. The notion of a homomorphism also remains unchanged for EL:, and the characterization of subsumption extends to EL: by just considering inconsistent EL:-concept descriptions as a special case: C ⊑ D iff C is inconsistent or there exists a homomorphism φ from G(D) into G(C).
Second, we have to cope with inconsistent EL:-ABoxes as a special case: for an inconsistent ABox A, a ∈_A C is valid for all concept descriptions C, and hence, msc_A(a) ≡ ⊥. However, extending Theorem 1 with this special case does not yield a sound and complete characterization of instance relationships for EL:. If this were the case, we would get that the instance problem for EL: is in P, in contradiction to complexity results shown in [16], which imply that the instance problem for EL: is coNP-hard.
The following example is an abstract version of an example given in [16]; it illustrates the incompleteness of a naive extension of Theorem 1 from EL to EL:.
Example 4. Consider the EL:-concept description C and the EL:-ABox A whose description tree G(C) and description graph G(A) are depicted in Figure 3. Obviously, there does not exist
Fig. 3. The EL:-description graph and the EL:-description tree from Example 4.
a homomorphism φ from G(C) into G(A) with φ(w_0) = a, where w_0 denotes the root of G(C). For each model I of A, however, the critical individual of A is interpreted either as an instance of P or as an instance of ¬P, and in either case a^I ∈ C^I. Thus, a is an instance of C w.r.t. A though there does not exist a homomorphism φ from G(C) into G(A) with φ(w_0) = a.
In the following section, we give a sound and complete characterization of instance relationships in EL¬, which again yields the basis for the characterization of k-approximations given in Section 4.2.
4.1 Characterizing instance in EL¬
The reason for the problem illustrated in Example 4 is that, in general, for the individuals in the ABox it is not always fixed whether they are instances of a given concept name or not. Thus, in order to obtain a sound and complete characterization analogous to Theorem 1, instead of G(A) one has to consider all so-called atomic completions of G(A).
Definition 3 (Atomic completion). Let G = (V, E, ℓ) be an EL¬-description graph and let N_C^G := {P ∈ N_C | P ∈ ℓ(v) or ¬P ∈ ℓ(v) for some v ∈ V}. An EL¬-description graph G* = (V, E, ℓ*) is an atomic completion of G if, for all v ∈ V,
1. ℓ(v) ⊆ ℓ*(v), and
2. for all concept names P ∈ N_C^G, {P, ¬P} ∩ ℓ*(v) ≠ ∅.
Note that, by definition, the labels of nodes in completions do not contain a conflict, i.e., no node is labeled with a concept name and its negation. In particular, if G has a conflicting node, then G does not have a completion. It is easy to see that an EL¬-ABox A is inconsistent iff G(A) contains a conflicting node. For this reason, in the following characterization of the instance relationship, we do not need to distinguish between consistent and inconsistent ABoxes.
Theorem 4. [14] Let A be an EL¬-ABox, G(A) the corresponding description graph, C an EL¬-concept description, G(C) the corresponding description tree, and a ∈ Ind(A). Then, a ∈_A C iff for each atomic completion G(A)* of G(A) there exists a homomorphism φ from G(C) into G(A)* with φ(w_0) = a.
Membership of the instance problem in coNP follows from Theorem 4: to refute a ∈_A C, one can guess an atomic completion G(A)* of G(A) and verify in polynomial time that there exists no homomorphism from G(C) into G(A)* mapping w_0 to a. Adding the coNP-hardness result obtained from [16], this shows
Corollary 3. The instance problem for EL¬ is coNP-complete.
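To make the role of atomic completions concrete, here is a small enumeration sketch (again using the hypothetical dictionary encoding from the earlier snippet; it is an illustration, not the paper's algorithm). It lists all conflict-free relabellings in which every node decides between P and ¬P for every concept name occurring in the graph; by Theorem 4, a ∈_A C iff every such completion admits a homomorphism from G(C) mapping the root to a.

    from itertools import product

    def atomic_completions(graph):
        """Yield the label maps of all atomic completions of an EL-not description graph."""
        labels = {v: set(lab) for v, lab in graph["labels"].items()}
        names = {lit.lstrip("¬") for lab in labels.values() for lit in lab}

        def conflicting(lab):
            return any(p in lab and "¬" + p in lab for p in names)

        if any(conflicting(lab) for lab in labels.values()):
            return  # conflicting node: no completion exists (inconsistent ABox)
        slots = [(v, p) for v in labels for p in names]
        for signs in product(("", "¬"), repeat=len(slots)):
            completed = {v: set(lab) for v, lab in labels.items()}
            for (v, p), sign in zip(slots, signs):
                completed[v].add(sign + p)
            # discard choices that contradict the original labels
            if not any(conflicting(lab) for lab in completed.values()):
                yield completed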
4.2 Computing k-approximations in EL¬
Not surprisingly, the algorithm computing the k-approximation/msc in EL does not yield the desired result for EL¬. For instance, in Example 4 the concept C_{T_k(a,G(A))} read off directly from G(A) is not subsumed by C, although, as we will see, msc_A(a) ⊑ C.
As in the extension of the characterization of instance relationships from EL to EL¬, we have to take into account all atomic completions instead of the single description graph G(A). Intuitively, one has to compute the least concept description for which there exists a homomorphism into each atomic completion of G(A). In fact, this can be done by applying the lcs operation to the set of all concept descriptions C_{T_k(a,G(A)_i)} obtained from the atomic completions G(A)_1, …, G(A)_n of G(A).
Theorem 5. Let A be an EL¬-ABox, a ∈ Ind(A), and k ∈ ℕ. If A is inconsistent, then msc_{k,A}(a) ≡ msc_A(a) ≡ ⊥. Otherwise, let G(A)_1, …, G(A)_n be the set of all atomic completions of G(A). Then, lcs(C_{T_k(a,G(A)_1)}, …, C_{T_k(a,G(A)_n)}) ≡ msc_{k,A}(a). If, starting from a, no cycle can be reached in A, then lcs(C_{T(a,G(A)_1)}, …, C_{T(a,G(A)_n)}) ≡ msc_A(a); otherwise the msc does not exist.
Proof sketch. Let A be a consistent EL¬-ABox and G(A)_1, …, G(A)_n the atomic completions of G(A). By definition of C_{T_k(a,G(A)_i)}, there exists a homomorphism ψ_i from G(C_{T_k(a,G(A)_i)}) into G(A)_i mapping the root onto a. Let C_k denote the lcs of {C_{T_k(a,G(A)_1)}, …, C_{T_k(a,G(A)_n)}}. The characterization of subsumption for EL¬ yields homomorphisms φ_i from G(C_k) into G(C_{T_k(a,G(A)_i)}).
Now it is easy to see that ψ_i ∘ φ_i yields a homomorphism from G(C_k) into G(A)_i mapping the root of G(C_k) onto a. Hence, a ∈_A C_k.
Assume C′ with depth(C′) ≤ k and a ∈_A C′. By Theorem 4, there exist homomorphisms χ_i from G(C′) into G(A)_i for all 1 ≤ i ≤ n, each mapping the root of G(C′) onto a. Since depth(C′) ≤ k, these homomorphisms immediately yield homomorphisms χ′_i from G(C′) into G(C_{T_k(a,G(A)_i)}); the characterization of subsumption yields C_{T_k(a,G(A)_i)} ⊑ C′ for all i, and hence C_k ⊑ C′. Thus, C_k ≡ msc_{k,A}(a).
Analogously, in case that, starting from a, no cycle can be reached in A, we conclude lcs(C_{T(a,G(A)_1)}, …, C_{T(a,G(A)_n)}) ≡ msc_A(a). Otherwise, with the same argument as in the proof of Theorem 3, it follows that the msc does not exist. ⊓⊔
In Example 4, we obtain two atomic completions, namely G(A)_1 with ℓ_1(b_2) ⊇ {P} and G(A)_2 with ℓ_2(b_2) ⊇ {¬P}. Now Theorem 5 implies that msc_A(a) is the lcs of the two concepts read off from these completions, which is indeed subsumed by C.
The examples showing the exponential blow-up of the size of k-approximations and most specific concepts in EL can easily be adapted to EL¬. However, we only have a double-exponential upper bound (though we strongly conjecture that the size can again be bounded single-exponentially): the size of each tree (and of the corresponding concept description) obtained from an atomic completion is at most exponential, and the size of the lcs of a sequence of EL¬-concept descriptions can grow exponentially in the size of the input descriptions [3]. Moreover, by an algorithm computing the lcs of the concept descriptions obtained from the atomic completions, the k-approximation (the msc) can be computed in double-exponential time.
Corollary 4. Let A be an EL¬-ABox, a ∈ Ind(A), and k ∈ ℕ.
– The k-approximation of a always exists. It may be of size |A|^k and can be computed in double-exponential time.
– The msc of a exists iff A is inconsistent or, starting from a, no cycle can be reached in A. If the msc exists, its size may grow exponentially in |A|, and it can be computed in double-exponential time. The existence of the msc can be decided in polynomial time.
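The polynomial-time existence test mentioned in the corollary amounts to a reachability check. A possible sketch (same hypothetical graph encoding as before): the msc of a exists iff the ABox is inconsistent, i.e., G(A) has a conflicting node, or no cycle of G(A) is reachable from a.

    def has_conflicting_node(graph):
        return any(lit.startswith("¬") and lit[1:] in lab
                   for lab in graph["labels"].values() for lit in lab)

    def msc_exists(graph, a):
        """Decide existence of msc_A(a) by DFS for a cycle reachable from a."""
        if has_conflicting_node(graph):
            return True                      # inconsistent ABox: msc_A(a) is BOTTOM
        seen, on_path = set(), set()

        def reaches_cycle(v):
            seen.add(v)
            on_path.add(v)
            for _, w in graph["edges"].get(v, []):
                if w in on_path or (w not in seen and reaches_cycle(w)):
                    return True
            on_path.discard(v)
            return False

        return not reaches_cycle(a)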
5 Most specific concepts in ALE
As already mentioned in the introduction, the characterization of instance relationships could not yet be extended from EL¬ to ALE. Since these structural characterizations were crucial for the algorithms computing the (k-approximation of the) msc in EL and EL¬, no similar algorithms for ALE can be presented here. However, we show that
1. given that N_C and N_R are finite sets, msc_{k,A}(a) always exists and can effectively be computed (cf. Theorem 6);
2. the characterization of instance relationships in EL is also sound for ALE (cf. Lemma 1), which allows for approximating the k-approximation; and
3. the main problems encountered in the structural characterization of instance relationships in ALE are illustrated (cf. Example 5).
The first result is achieved by a rather generic argument. Given that the signature, i.e., the sets N_C and N_R, is fixed and finite, it is easy to see that the set of ALE-concept descriptions of depth at most k built using only names from N_C ∪ N_R is finite (up to equivalence) and can effectively be computed. Since the instance problem for ALE is known to be decidable [16], enumerating this set and retrieving the least concept description which has a as an instance obviously yields an algorithm computing msc_{k,A}(a).
Theorem 6. Let N_C and N_R be fixed and finite, and let A be an ALE-ABox built over a set N_I of individuals and N_C ∪ N_R. Then, for k ∈ ℕ and a ∈ Ind(A), the k-approximation of a w.r.t. A always exists and can effectively be computed.
Note that the above argument cannot be adapted to prove the existence of the msc for acyclic ALE-ABoxes unless the size of the msc can be bounded appropriately. Finding such a bound remains an open problem.
The algorithm sketched above is obviously not applicable in real applications. Thus, in the remainder of this section, we focus on extending the improved algorithms obtained for EL and EL¬ to ALE.
5.1 Approximating the k-approximation in ALE
We first have to extend the notions of description graph and of description tree from EL¬ to ALE: in order to cope with value restrictions occurring in ALE-concept descriptions, we allow for two types of edges, namely those labeled with role names r ∈ N_R (representing existential restrictions of the form ∃r.C) and those labeled with ∀r (representing value restrictions of the form ∀r.C). Again, there is a 1–1 correspondence between ALE-concept descriptions and ALE-description trees, and an ALE-ABox A is translated into an ALE-description graph G(A) just as described for EL-ABoxes. The notion of a homomorphism also extends to ALE in a natural way: a homomorphism φ from an ALE-description tree G(C) into an ALE-description graph G(A) is a mapping satisfying the conditions (1) and (2) on homomorphisms between EL-description trees and EL-description graphs, and additionally (3) it maps ∀r-edges onto ∀r-edges.
We are now ready to formalize soundness of the characterization of instance relationships for ALE.
Lemma 1. [14] Let A be an ALE-ABox with description graph G(A), C an ALE-concept description with description tree G(C), and a ∈ Ind(A). If there exists a homomorphism φ from G(C) into G(A) with φ(v_0) = a, then a ∈_A C.
As an immediate consequence of this lemma, we get a ∈_A C_{T_k(a,G(A))} for all k ≥ 0, where the trees T(a,G(A)) and T_k(a,G(A)) are defined just as for EL. This in turn yields msc_{k,A}(a) ⊑ C_{T_k(a,G(A))} and hence an algorithm computing an approximation of the k-approximation for ALE. In fact, such approximations already turned out to be quite usable in our process engineering application [4].
The following example now shows that the characterization is not complete for ALE, and that, in general, C_{T_k(a,G(A))} is not equivalent to msc_{k,A}(a). In particular, it demonstrates the difficulties one encounters in the presence of value restrictions.
Example 5. Consider an ALE-ABox A and an ALE-concept description C; the corresponding description graph G(A) and description tree G(C) are depicted in Figure 4. Note that G(A) is the unique atomic completion of itself (w.r.t. the concept names occurring in it).
Fig. 4. The ALE-description graph and the ALE-description tree from Example 5.
It is easy to see that there does not exist a homomorphism φ from G(C) into G(A) mapping the root of G(C) onto a. However, a ∈_A C: for each model I of A, b^I either has no s-successor or at least one s-successor. In the first case, b_2^I yields the desired r-successor of a^I in (∀s.P ⊓ ∃r.∃s.⊤)^I. In the second case, b_1^I yields the desired r-successor of a^I. Thus, for each model I of A, a^I ∈ C^I.
Moreover, one can check that C_{T_4(a,A)} ⋢ C. Hence, C_{T_4(a,A)} ⊓ C is strictly subsumed by C_{T_4(a,A)}, which implies that msc_{4,A}(a) is strictly subsumed by C_{T_4(a,A)}.
Intuitively, the above example suggests that, in the definition of atomic completions, one should take into account not only (negated) concept names but also more complex concept descriptions. However, it is not clear whether an appropriate set of such concept descriptions can be obtained just from the ABox, nor how these concept descriptions would need to be integrated into the completion in order to obtain a sound and complete structural characterization of instance relationships in ALE.
6 Conclusion
Starting with the formal definition of the k-approximation of the msc, we showed that, for ALE and a finite signature (N_C, N_R), the k-approximation of the msc of an individual b always exists and can effectively be computed. For the sublanguages EL and EL¬, we gave sound and complete characterizations of instance relationships that lead to practical algorithms. As a by-product, we obtained a characterization of the existence of the msc in EL-/EL¬-ABoxes and showed that the msc can effectively be computed in case it exists.
First experiments with manually computed approximations of the msc in the process engineering application were quite encouraging [4]: used as inputs for the lcs operation, i.e., the second step in the bottom-up construction of the knowledge base, they led to descriptions of building blocks the engineers could use to refine their knowledge base. In the next steps, the run-time behavior and the quality of the output of the algorithms presented here are to be evaluated by a prototype implementation in the process engineering application.
--R
Foundations of Databases.
Building and structuring description logic knowledge bases using least common subsumers and concept analysis.
Computing least common subsumers in description logics.
Conceptual graphs: Fundamental notions.
Learning the CLASSIC description logic: Theoretical and experimental results.
Computers and Intractability: A Guide to the Theory of NP-Completeness.
Using an expressive description logic: FaCT or fiction?
Reasoning and Revision in Hybrid Representation Systems.
On the complexity of the instance checking problem in concept languages with existential quantification.
ROME: A repository to support the integration of models over the lifecycle of model-based engineering processes.
--TR
On the complexity of the instance checking problem in concept languages with existential quantification
Foundations of Databases
Computers and Intractability
Building and Structuring Description Logic Knowledge Bases Using Least Common Subsumers and Concept Analysis
Computing Least Common Subsumers in Description Logics with Existential Restrictions
Computing the Least Common Subsumer and the Most Specific Concept in the Presence of Cyclic ALN-Concept Descriptions | description logics;non-standard inferences;most specific concepts |
635905 | An efficient direct solver for the boundary concentrated FEM in 2D. | The boundary concentrated FEM, a variant of the hp-version of the finite element method, is proposed for the numerical treatment of elliptic boundary value problems. It is particularly suited for equations with smooth coefficients and non-smooth boundary conditions. In the two-dimensional case it is shown that the Cholesky factorization of the resulting stiffness matrix requires O(Nlog4N) units of storage and can be computed with O(Nlog8N) work, where N denotes the problem size. Numerical results confirm theoretical estimates. | Introduction
The recently introduced boundary concentrated finite element method of [8] is a numerical method that is particularly suited for solving elliptic boundary value problems with the following two properties: a) the coefficients of the equations are analytic so that, by elliptic regularity, the solution is analytic in the interior; b) globally, the solution has low Sobolev regularity due to, for example, boundary conditions with low regularity or non-smooth geometries. The boundary concentrated FEM exploits interior regularity in the framework of the hp-version of the finite element method (hp-FEM) by using special types of meshes and polynomial degree distributions: meshes that are strongly refined toward the boundary (see Figs. 5, 6 for typical examples) are employed in order to cope with the limited regularity near the boundary; away from the boundary, where the solution is smooth, high approximation order is used on large elements. In fact, judiciously linking the approximation order to the element size leads to optimal approximation results (see Theorem 2.4 and Remark 2.6 for the precise notion of optimality).
In the present paper, we focus on the boundary concentrated FEM in two space dimensions and present a scheme for the Cholesky factorization of the resulting stiffness matrix that requires O(N log⁴ N) units of storage and O(N log⁸ N) work; here, N is the problem size. The key to this efficient Cholesky factorization scheme is an algorithm that numbers the unknowns such that the profile of the stiffness matrix is very small (see Fig. 1 for a typical sparsity pattern of the Cholesky factor). Numerical examples confirm our complexity estimates.
The boundary concentrated FEM can be used to realize a fast (i.e., with linear-logarithmic complexity) application of discrete Poincaré-Steklov and Steklov-Poincaré operators, as we will discuss in Section 2.5. This use of the boundary concentrated FEM links it to the classical boundary element method (BEM). Indeed, it may be regarded as a generalization of the BEM: while the BEM is effectively restricted to equations with constant coefficients, the boundary concentrated FEM is applicable to equations with variable coefficients yet retains the rate of convergence of the BEM. Since the Cholesky factorization of the stiffness matrix allows for an exact, explicit data-sparse representation of boundary operators such as the Poincaré-Steklov operator with linear-logarithmic complexity, the boundary concentrated FEM provides a sparse direct solver for the 2D BEM (because it directly computes the full set of Cauchy data on the boundary corresponding to L-harmonic functions) that has almost linear complexity. It is also a new alternative to modern matrix compression techniques now used in BEM.
We present a Cholesky factorization scheme for the boundary concentrated FEM in two dimensions, that is, a direct method. We mention that iterative methods for solving the systems of linear equations arising in the boundary concentrated FEM are considered in [8]. Depending on the boundary conditions considered, different preconditioners are required for an efficient iterative solution method. For example, while for Dirichlet problems the condition number of the stiffness matrix grows only polylogarithmically with the problem size [8], Neumann problems require more effective preconditioning. A suitable preconditioner, which has block-diagonal structure, was proposed in [8]. We point out, however, that an application of this preconditioner requires an inner iteration, making our direct solver an attractive option.
We conclude this introduction by mentioning that, although we restrict our exposition to the case of symmetric positive definite problems, our procedure can be extended to non-symmetric problems by constructing the LU-decomposition rather than the Cholesky factorization.
2 Boundary Concentrated FEM
In this section, we present a brief survey of the boundary concentrated FEM. We refer to [8] for a detailed description of this technique and complete proofs. For the sake of concreteness, we discuss Dirichlet problems, although other types of boundary conditions such as Neumann or mixed boundary conditions can be treated analogously.
2.1 Problem class and abstract Galerkin FEM
For a polygonal Lipschitz
domain
we consider the Dirichlet problem
in
@
where the dierential operator L is given by
with uniformly (in x 2
symmetric positive denite matrix
i;j=1 . Moreover, in the
boundary concentrated FEM, we assume that A and the scalar functions a are analytic on
The operator
is the trace operator that restricts functions
on
to
the boundary @ We assume that the operator L generates an H
i.e.,
The boundary value problem (2.1) is understood in the usual, variational sense. That is, solving
is equivalent to the problem:
Find
with
Z
The standard FEM is obtained from the weak formulation (2.5) by replacing the space H
withanitedimensionalsubspace VN H
8 For the Dirichlet problem (2.1), we introduce
the trace space
@
For an approximation N 2 YN to we can then dene the FEM for (2.5) as follows:
Find uN 2 VN s.t.
Z
The coercivity assumption (2.4) ensures existence of the nite element approximation uN .
Furthermore, by Cea's Lemma there is a C > 0 independent of VN such that uN satises
C inf
In practice, the approximations N are obtained with the aid of the L 2 -projection operator
YN by setting
the function QN is dened
by
In the next section, we specify the approximation spaces VN , the proper choice of which is
intimately linked to the regularity properties of the solution u of (2.1). The analyticity of
the data A, a 0 , and f implies by interior regularity that the solution u is analytic on
if
furthermore
for some - 2 (0; 1], then the blow-up of higher order derivatives near
the boundary can be characterized precisely in terms of so-called countably normed spaces
(see [8] for the details). This regularity allows us to prove an optimal error estimate for the
boundary concentrated FEM in Theorem 2.4 below.
2.2 Geometric meshes and linear degree vectors
For ease of exposition, we will restrict our attention to regular triangulations (i.e., no hanging
nodes) consisting of a-ne triangles. (We refer, for example, to [?] for the precise denition
of regular triangulations.) We emphasize, however, that an extension to quadrilateral and
curvilinear elements is possible. The triangulation of the
domain
consists of
elements K, each of which is the image FK
K) of the equilateral reference triangle
under the a-ne map FK . We furthermore assume that the triangulation T is
-shape-regular,
i.e.,
Here, hK denotes the diameter of the element K. Of particular importance to us will be
geometric meshes, which are strongly rened meshes near the boundary @
Definition 2.1 (geometric mesh) A γ-shape-regular (cf. (2.10)) mesh T is called a geometric mesh with boundary mesh size h if there exist c_1, c_2 > 0 such that for all K ∈ T:
1. if K ∩ ∂Ω ≠ ∅, then c_1 h ≤ h_K ≤ c_2 h;
2. if K ∩ ∂Ω = ∅, then c_1 inf_{x∈K} dist(x, ∂Ω) ≤ h_K ≤ c_2 sup_{x∈K} dist(x, ∂Ω).
Typical examples of geometric meshes are depicted in Figs. 5, 6. Note that the restriction to the boundary ∂Ω of a geometric mesh is a quasi-uniform mesh, which justifies speaking of a "boundary mesh size h".
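As a quick illustration of Definition 2.1, the following sketch (my own, not from the paper) checks the two conditions for a list of triangles given by their vertex coordinates; dist_to_boundary is a hypothetical user-supplied function for the domain at hand, and c1, c2 play the role of the constants in the definition (vertex values are used as a proxy for the inf/sup over K).

    import numpy as np

    def is_geometric_mesh(triangles, h, dist_to_boundary, c1=0.25, c2=4.0):
        """Check conditions 1./2. of Definition 2.1 for triangles (lists of 3 vertices)."""
        for tri in triangles:
            v = np.asarray(tri, dtype=float)
            hK = max(np.linalg.norm(v[i] - v[j]) for i in range(3) for j in range(i))
            d = [dist_to_boundary(x) for x in v]
            if min(d) < 1e-12:                         # K touches the boundary
                if not (c1 * h <= hK <= c2 * h):       # condition 1: h_K ~ h
                    return False
            else:
                # condition 2: h_K comparable to the distance of K from the boundary
                if not (c1 * min(d) <= hK <= c2 * max(d)):
                    return False
        return True

    # Example for the unit square:
    dist_square = lambda x: min(x[0], x[1], 1 - x[0], 1 - x[1])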
In order to define hp-FEM spaces on a mesh T, we associate a polynomial degree p_K ∈ ℕ with each element K, collect these p_K in the polynomial degree vector p := (p_K)_{K∈T}, and set
  S^p(T) := {u ∈ H¹(Ω) | u|_K ∘ F_K ∈ Π_{p_K} for all K ∈ T},
where for p ∈ ℕ we denote by Π_p the space of all polynomials of degree ≤ p.
The linear degree vector is a particularly useful polynomial degree distribution in conjunction with geometric meshes:
Definition 2.2 (linear degree vector) Let T be a geometric mesh with boundary mesh size h in the sense of Definition 2.1. A polynomial degree vector p = (p_K)_{K∈T} is said to be a linear degree vector with slope α > 0 if the degrees grow linearly in the logarithm of the relative element size, p_K ≈ 1 + α log(h_K / h).
An important observation about geometric meshes and linear degree vectors is that the dimension dim S^p(T) of the space S^p(T) is proportional to the number of points on the boundary:
Proposition 2.3 ([8]) Let T be a geometric mesh with boundary mesh size h. Let p be a linear degree vector with slope α > 0 on T. Then there exists C > 0 depending only on the shape-regularity constant γ and the constants c_1, c_2, α of Definitions 2.1, 2.2 such that
  dim S^p(T) ≤ C Σ_{K∈T} p_K² ≤ C h⁻¹.
2.3 Error and complexity estimates
We formulate an approximation result for the hp-FEM on geometric meshes applied to (2.1):
Theorem 2.4 ([8]) Let u be the solution to (2.1) with coefficients A, a_0, and right-hand side f analytic on Ω̄. Assume additionally that u ∈ H^{1+δ}(Ω) for some δ ∈ (0, 1). Let T be a geometric mesh with boundary mesh size h and let p be a linear degree vector on T with slope α > 0. Then the FE solution u_N given by (2.7) satisfies
  ‖u − u_N‖_{H¹(Ω)} ≤ C (h^δ + h^{bα}).
The constants C, b > 0 depend only on the shape-regularity constant γ, the constants c_1, c_2 appearing in Definition 2.1, the data A, a_0, f, and on δ, ‖u‖_{H^{1+δ}(Ω)}. For sufficiently large α the boundary concentrated FEM achieves the optimal rate of convergence O(N^{−δ}), where N = dim S^p(T) is proportional to the number of boundary points.
Remark 2.5 Theorem 2.4 is formulated for the Dirichlet problem (2.1). Analogous approximation
results hold for Neumann or mixed boundary conditions as well.
Remark 2.6 Theorem 2.4 asserts a rate O(n - ) for the boundary concentrated FEM, where
This rate is optimal in the following sense: Setting for - 2 (0; 1)
on
we can introduce the n-width
En
sup
v2En
where the rst inmum is taken over all subspaces E n H
of dimension n. It can then be
shown that Cn - d n for some C > 0 independent of n.
2.4 Shape functions and stiffness matrix
In order to convert the variational formulation (2.7) into a system of linear equations, a basis of the finite element space S^p(T) has to be chosen. Several choices of basis functions ("shape functions") are standard in hp-FEM, [2, 12, 7]. Their common feature is that the shape functions can be associated with the topological entities "vertices," "edges," and "elements" of the triangulation T. This motivates us to introduce the following notion of "standard" bases:
Definition 2.7 A basis B of S^p(T) is said to be a standard hp-FEM basis if each shape function φ ∈ B falls into exactly one of the following three categories:
1. vertex shape functions: ' is a vertex shape function associated with vertex V if supp '
consists of all elements that have V as a vertex;
2. edge shape functions: ' is an edge shape function associated with edge e of T if supp '
consists of the (at most two) elements whose edge includes e;
3. internal shape functions: ' is an internal shape function associated with element K if
For a standard basis in this sense, we assign spatial points, called nodes, to degrees of freedom
as follows:
1. we assign to the shape function associated with vertex V the point
2. we assign to the side shape functions associated with edge e the midpoint of e;
3. we assign to the internal shape functions associated with element K the barycenter of K.
One example of a standard basis in the sense of Denition 2.7 is obtained by assembling (see,
e.g., [2, 12, 7]) the so-called hierarchical shape functions:
Example 2.8 We construct a basis of the space S
in two steps: In the rst step, we
dene shape functions on the reference element ^
K. In the second step, we dene the basis of
by an assembling process.
1. step: Dene one-dimensional shape functions i on the reference interval ( 1; 1) by
where the functions L i are the standard Legendre polynomials and the scaling factors c i are
given by c
Denote by v i , the three vertices of ^
K and by i , the three edges (we
assume polynomial degrees that we associate
with the edges i and let p 2 N be the polynomial degree of the internal shape functions. We
then dene vertex shape functions V, side shape functions
functions I as follows:
V := the usual linear nodal shape functions n i with
y
I
The side shape functions S 2 , S 3 are obtained similarly with p 1 replaced with
suitable coordinate transformation. Note that internal shape functions vanish on @ ^
K and that
edge shape functions vanish on two edges.
2. step: Shape functions as dened on the reference element ^
K are now assembled to yield a
basis of S
First, the standard piecewise linear hat functions are obtained by simply
assembling the shape functions V of each element. The internal shape functions are simply
taken as
else
I is the internal shape function on the reference element ^
K dened above. It
remains to assemble the side shape functions. To that end, we associate with each edge e a
polynomial degree p e := min fp K j e is edge of Kg. Let e be an edge shared by two elements
K 0 . For simplicity of notation assume that the element maps FK , FK 0 are such that FK( 1
and that additionally FK (x; (we refer to [2, 7] for
details on how to treat the general case). We then dene p e 1 edge shape functions ' i;e
associated with edge e by setting
else
An analogous formula holds for edges e with e @
Once a basis of S^p(T) has been chosen, the hp-FEM (2.7) can be formulated as seeking the solution U of a system of linear equations AU = F. We mention in passing that computing the stiffness matrix A and the load vector F to sufficient accuracy can be accomplished with work O(dim V_N), [8].
2.5 Sparse Factorization of the Schur complement
We discuss how the Cholesky factorization of the stiness matrix A leads to the explicit sparse
representation of discrete Poincare-Steklov operators. Let T be the Poincare{Steklov operator
(Dirichlet-Neumann map)
where
1 u is the co-normal derivative of the solution u to the equation
condition
and the trace space YN :=
VN , the discrete
approximation
N is dened as follows: For 2 YN , the approximation TN 2 Y 0
is given by
where ev 2 VN is an arbitrary extension of v satisfying
An analysis of the error T TN was presented in [8]:
Theorem 2.9
Let
be a polygon. Then the following two statements hold:
1. There exists - 0 > 1=2 such that the Poincare-Steklov operator T maps continuously from
2. Under the hypotheses of Theorem 2.4 (with - 2 (0; 1) as in the statement of Theorem 2.4)
there holds for arbitrary - 2 [0; -] \ [0; - 0 )
1=2;@
If a standard basis (in the sense of Definition 2.7) of the space S^p(T) is chosen, then the shape functions can be split into "interior" and "boundary" shape functions. A shape function is said to be "interior" if its node (see Definition 2.7) lies in Ω; it is a "boundary" shape function if its node lies on ∂Ω. To this partitioning of basis functions corresponds a block partitioning of the stiffness matrix A_N ∈ ℝ^{dim V_N × dim V_N} for the unconstrained space V_N of the following form:
  A_N = ( A_ΓΓ  A_ΓI )
        ( A_IΓ  A_II )
The subscript I indicates the interior shape functions and Γ marks the boundary shape functions. If we choose Y′_N = Y_N, then the matrix representation of the operator T_N is given by the Schur complement
  T_N := A_ΓΓ − A_ΓI A_II⁻¹ A_IΓ.
Inserting the Cholesky factorization A_II = L L^T leads to the desired direct FE method for the Schur complement. Because the Cholesky factor L can be computed with linear-logarithmic complexity (see Section 3), this provides an efficient representation of the Poincaré-Steklov operator. We finally mention that our factorization scheme for the Schur complement carries over verbatim to the case of the Neumann-Dirichlet map and also to the case of mixed boundary conditions.
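To make the algebra above concrete, here is a small sketch (my own illustration, using SciPy's dense Cholesky for brevity rather than the profile-oriented factorization of Section 3) that assembles the discrete Poincaré-Steklov map T_N = A_ΓΓ − A_ΓI A_II⁻¹ A_IΓ from a given stiffness matrix and an index split into boundary (Γ) and interior (I) degrees of freedom.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def poincare_steklov(A, boundary_idx, interior_idx):
        """Schur complement T_N = A_GG - A_GI A_II^{-1} A_IG of the stiffness matrix A."""
        G, I = np.asarray(boundary_idx), np.asarray(interior_idx)
        A_GG = A[np.ix_(G, G)]
        A_GI = A[np.ix_(G, I)]
        A_IG = A[np.ix_(I, G)]
        A_II = A[np.ix_(I, I)]
        c = cho_factor(A_II)          # A_II = L L^T (the interior block is SPD)
        X = cho_solve(c, A_IG)        # X = A_II^{-1} A_IG
        return A_GG - A_GI @ X

    # Usage: T = poincare_steklov(A, boundary_dofs, interior_dofs); T maps Dirichlet
    # data on the boundary to the discrete co-normal derivative.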
3 Cholesky factorization of the stiffness matrix
This section is devoted to the main result of the paper, the development of an efficient Cholesky factorization scheme for the stiffness matrix arising in the 2D boundary concentrated FEM. The key issue is the appropriate numbering of the degrees of freedom. First, we illustrate this numbering scheme for the case of geometric meshes and constant polynomial degree p ≡ 1 (Sections 3.2, 3.3). We start with this simpler case because our numbering scheme for the degrees of freedom (Algorithm 3.10) is based on a binary space partitioning; in the case p ≡ 1 the degrees of freedom can immediately be associated with points in space, namely, the vertices of the mesh. The general case of linear degree vectors is considered in Section 3.4.
For the case p ≡ 1 we recall that, by Definition 2.7, the vertices of the mesh are called nodes. We denote by V the set of nodes and say that a node V′ is a neighbor of a node V if there exists an element K ∈ T such that V and V′ are vertices of K. It will prove useful to introduce the set of neighbors of a node V ∈ V as
  N(V) := {V′ ∈ V | V′ is a neighbor of V},   (3.1)
because the sparsity pattern of the stiffness matrix can be characterized with the aid of the sets N(V).
3.1 Nested dissection in direct solvers
Let A ∈ ℝ^{N×N} be a symmetric positive definite matrix. We denote by L its Cholesky factor, i.e., the lower triangular matrix L with L L^T = A. For sparse, symmetric positive definite matrices A ∈ ℝ^{N×N} it is customary, [5], to introduce the i-th bandwidth β_i and the j-th frontwidth ω_j of A as
  β_i := i − min{j | A_ij ≠ 0},   ω_j := |{i > j | A_ik ≠ 0 for some k ≤ j}|.
It can be shown (see Proposition 3.1(i)) that in each row i, only the entries L_ij with i − β_i ≤ j ≤ i are non-zero. The j-th frontwidth ω_j measures the number of non-zero subdiagonal entries in column j of L, i.e.,
  ω_j = |{i > j | L_ij ≠ 0}|.
The frontwidth ω is given by
  ω := max_j ω_j.
The cost of computing the Cholesky factor L can then be quantified in terms of the numbers β_i, ω_j:
Proposition 3.1 Let A ∈ ℝ^{N×N} be symmetric, positive definite. Then
(i) the storage requirement for L is nnz = N + Σ_j ω_j;
(ii) the number of floating point operations to compute L consists of N square roots for the diagonal entries L_ii, Σ_j ω_j divisions, and (1/2) Σ_{j=1}^{N−1} ω_j(ω_j + 3) multiplications.
In particular, the storage requirement nnz and the number of floating point operations W can be bounded by
  nnz ≤ N(1 + ω),   W ≤ C N ω².   (3.5)
Proof. See, for example, [5, Chapters 2, 4].
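The quantities above are easy to compute from the sparsity pattern alone. The following helper is an illustration I added (not part of the paper); it returns the bandwidths β_i and frontwidths ω_j for a given ordering, from which the storage and work estimates of Proposition 3.1 can be evaluated.

    import numpy as np

    def bandwidth_frontwidth(A, tol=0.0):
        """Bandwidths beta_i and frontwidths omega_j of a symmetric sparse pattern A."""
        A = np.asarray(A)
        N = A.shape[0]
        nz = np.abs(A) > tol
        first = np.array([np.flatnonzero(nz[i, :i + 1])[0] for i in range(N)])
        beta = np.arange(N) - first
        # omega_j = number of rows i > j owning a nonzero in some column k <= j
        omega = np.array([np.count_nonzero(first[j + 1:] <= j) for j in range(N)])
        return beta, omega

    # Example: beta, omega = bandwidth_frontwidth(A)
    # storage estimate for the Cholesky factor:  N + omega.sum()  <=  N * (1 + omega.max())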
In view of the estimate (3.5), various algorithms have been devised to number the unknowns so as to minimize the frontwidth ω; the best-known examples include the reverse Cuthill-McKee algorithm, [4], the algorithm of Gibbs-Poole-Stockmeyer, [6], and nested dissection. For stiffness matrices arising in the 2D boundary concentrated FEM, we will present an algorithm based on nested dissection in Section 3.3 that numbers the nodes such that ω = O(log² N).
The basic nested dissection algorithm in FEM reads as follows:
Algorithm 3.2 (nested dissection)
nested_dissection(V, N_0)
input: set of nodes V, starting number N_0
output: numbering of the nodes V starting with N_0
if |V| = 1 then
  label the element of V with the number N_0
else {
  1. partition the nodes V into three mutually disjoint sets V_left, V_right, V_bdy such that
     (a) |V_left| ≈ |V_right|,
     (b) |V_bdy| is "small",
     (c) N(V) ⊂ V_right ∪ V_bdy for all V ∈ V_right;
  2. if V_left ≠ ∅: nested_dissection(V_left, N_0);
  3. if V_right ≠ ∅: nested_dissection(V_right, N_0 + |V_left|);
  4. enumerate the elements of V_bdy starting with the number N_0 + |V_left| + |V_right|;
}
return
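A direct transcription of Algorithm 3.2 is given below (my own sketch; the partitioning step is left abstract as a callback, since Sections 3.2-3.3 discuss how to realize it for boundary concentrated meshes). It returns a dictionary mapping each node to its number.

    def nested_dissection(nodes, partition, n0=0, numbering=None):
        """Algorithm 3.2: number `nodes` starting at n0 using `partition(nodes)`,
        which must return disjoint sets (left, right, bdy) with no left-right coupling
        and strictly smaller left/right parts (otherwise the recursion does not terminate)."""
        if numbering is None:
            numbering = {}
        if not nodes:
            return numbering
        if len(nodes) == 1:
            numbering[next(iter(nodes))] = n0
            return numbering
        left, right, bdy = partition(nodes)
        if left:
            nested_dissection(left, partition, n0, numbering)
        if right:
            nested_dissection(right, partition, n0 + len(left), numbering)
        for j, v in enumerate(bdy):          # separator nodes are numbered last
            numbering[v] = n0 + len(left) + len(right) + j
        return numbering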
The key property is (1c). It ensures that the stiffness matrix A has the following block structure:
  A = ( A_left,left    0               A_bdy,left^T   )
      ( 0              A_right,right   A_bdy,right^T  )
      ( A_bdy,left     A_bdy,right     A_bdy,bdy      )        (3.6)
Figure 1: Left: mesh and initial geometric partitioning (thick line). Right: sparsity pattern of L.
by
where are the frontwidths of the submatrices A left , A right . Thus, if
the recursion guarantees that jV bdy j is small and the frontwidths of these submatrices are small,
then the numbering scheme is e-cient. Since nested dissection operates recursively on the
sets V left , V right , its eectiveness hinges on the availability of partitioning strategies for Step 1
of Algorithm 3.2 that yield small jV bdy j. The particular node distributions appearing in the
boundary concentrated FEM will allow us to devise such a scheme in Section 3.3.
3.2 Nested dissection: an example
We show that for meshes that arise in the boundary concentrated FEM, it is possible to perform the partitioning of Step 1 in Algorithm 3.2 such that the set V_bdy is very small compared to V. We illustrate this in the following example.
Example 3.3 Consider meshes that are refined toward a single edge as shown in Fig. 1. The thick line partitions ℝ² into two half-spaces H_<, H_>, and the nodes are partitioned as follows:
  V_bdy := {V ∈ H_< | V has a neighbor in H_>} ∪ {V ∈ H_> | V has a neighbor in H_<},
  V_left := (V ∩ H_<) \ V_bdy,   V_right := (V ∩ H_>) \ V_bdy.
Note that, due to choosing the partitioning line as the center line, we have
  |V_left| ≈ |V_right|,   |V_bdy| ≤ C log|V|.   (3.8)
(Estimates of this type are rigorously established in Lemma 3.4 below.) We then proceed as in Algorithm 3.2 by partitioning along a "center line" of the subsets of nodes (a more rigorous realization of this procedure is Algorithm 3.9 below, which is based on binary space partitioning). We note that the subsets V_left, V_right have a similar structure as the original set V; thus, they are partitioned satisfying an estimate analogous to (3.8). Using this partitioning scheme in Algorithm 3.2 leads to very small frontwidths (see Fig. 1 for the actual sparsity pattern). We analyze this example in more detail in Examples 4.1, 4.2 below.
The bounds in (3.8) are "geometrically clear." A more rigorous proof is established in the following lemma.
Lemma 3.4 Let T be a geometric mesh with boundary mesh size h on a domain Ω. Fix b ∈ ∂Ω and choose a partitioning vector t ≠ 0 such that the following cone condition is satisfied (cf. Fig. 2):
  C_t := {x | |⟨x − b, t⟩| > δ |x − b| |t|} ∩ B_ε(b) ⊂ Ω̄.   (3.9)
Define the half-spaces
  H_< := {x | ⟨x − b, t⟩ < 0},   H_> := {x | ⟨x − b, t⟩ > 0},
and set
  V_bdy := {V | V is a node of T and V ∈ H_< has a neighbor in H_>, or V ∈ H_> has a neighbor in H_<},
  V_left := (V ∩ H_<) \ V_bdy,   V_right := (V ∩ H_>) \ V_bdy.
Then
  |V_bdy| ≤ C log|V|,   |V_left| ≈ |V_right| ≈ |V|,
where C depends only on δ, ε, and the constants describing the geometric mesh T. In particular, C is independent of the point b.
Proof. Let l be the line passing through the point b with direction ~ n,
0g. Next, dene the function
dist
The key property of d is that
Denoting by K bdy the set of all elements that intersect the line l, K bdy
we can bound
K2K bdy
Z
Z
In order to proceed, we need two assertions:
Assertion 1: For - 2 (0; 1) given by the cone condition (3.9) there holds
dist (x; @ dist (x; b) p
dist (x; @ 8x\ l: (3.12)
Figure 2: Notation for partitioning at a boundary point b.
The rst estimate of (3.12) is obvious. For the second one, geometric considerations show for
x\ l
dist
dist
dist
where C 0 is the lateral part of @C t . This proves (3.12).
Assertion 2: There exists C > 0 depending only on the parameters describing the geometric
mesh and the parameters of the cone condition such that
Again, the rst bound is obvious. For the second bound, let x 2 K for some K 2 K bdy and
choose xK 2 K \ l. Then
dist dist
dist
where we used (3.12). Next, we use the properties of geometric meshes and (3.10) to get
dist (x; b)g max fh; hK g +p
dist
Inserting this bound in (3.11) gives
Z
dx C
Z
r dr Cj log hj:
Since jV j h 1 (cf. [8, Prop. 2.7]), we have proved the rst estimate of the lemma.
For the second estimate of the lemma, we note that the boundary parts < :=
@
\H< \B (b)
and >
@
\H> \B (b) have positive lengths. Thus we have jV \ < j h 1 and jV \ > j h 1
and a fortiori jV left j jV j jV right j; the proof of the lemma is now complete.
The reason for the effectiveness of the partitioning strategy in Example 3.3 is that at each stage of the recursion, Algorithm 3.2 splits the set of nodes into two sets of (roughly) equal size and a set of boundary nodes V_bdy that is very small. The property (3.8), proved in Lemma 3.4, motivates the following definition:
Definition 3.5 ((C, q)-balanced partitioning) The nested dissection Algorithm 3.2 is said to be (C, q)-balanced for a set V if at each stage i of the recursion there holds
  |V_left^(i)| ≈ |V_right^(i)|   (3.14)   and   |V_bdy^(i)| ≤ C log^q |V^(i)|.   (3.15)
Here, the superscripts i indicate the level of the recursion.
For a (C, q)-balanced nested dissection algorithm, we can then show that the frontwidth grows only moderately with the problem size:
Proposition 3.6 If the nested dissection Algorithm 3.2 is (C, q)-balanced for V, then the numbering generated by Algorithm 3.2 leads to a frontwidth ω(A) of the stiffness matrix A with
  ω(A) ≤ C̃ log^{q+1} |V|.
Proof. The assumption that the algorithm is (C, q)-balanced easily implies that |V^(i)| decays geometrically in i. Thus the depth of the recursion is at most O(log|V|), since the recursion stops if |V^(i)| = 1. The bound (3.7) then implies
  ω(A) ≤ Σ_i |V_bdy^(i)| ≤ C log|V| · log^q |V| = C̃ log^{q+1} |V|,
where we used the definition of C̃ of the statement of the proposition.
In Example 3.3, we studied the model situation of meshes refined toward a straight edge. In view of Lemma 3.4, we expect the partitioning strategy to be (C, 1)-balanced. Hence, we expect the frontwidth to be of the order O(log² |V|). In Examples 4.1, 4.2, we will confirm this numerically.
3.3 Node numbering for geometric meshes: the case p ≡ 1
We now present a partitioning strategy that allows Algorithm 3.2 to be (C, 1)-balanced for node sets that arise in the boundary concentrated FEM. The partitioning rests on the binary space partitioning (BSP), [3], which is reproduced here for convenience's sake:
Algorithm 3.7 (BSP)
BSP(X, t)
input: set of points X, partitioning vector t
output: partitioning of X into X_<, X_>, X_= with |X_<| ≈ |X_>| and X = X_< ∪ X_> ∪ X_=
1. determine the median m of the set {⟨x, t⟩ | x ∈ X};
2. X_< := {x ∈ X | ⟨x, t⟩ < m},  X_> := {x ∈ X | ⟨x, t⟩ > m},  X_= := {x ∈ X | ⟨x, t⟩ = m};
return (X_<, X_>, X_=)
Remark 3.8 Since the median of a set can be determined in optimal (i.e., linear in the number of elements) complexity (see, e.g., [1, 9]), Algorithm 3.7 can be realized in optimal complexity.
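A direct realization of Algorithm 3.7 is sketched below (an illustration; for brevity it uses NumPy's median instead of a linear-time selection, which costs O(|X| log|X|) rather than the optimal O(|X|) of Remark 3.8).

    import numpy as np

    def bsp(points, t):
        """Algorithm 3.7: split `points` by the median of the projections <x, t>."""
        P = np.asarray(points, dtype=float)
        proj = P @ np.asarray(t, dtype=float)
        m = np.median(proj)
        # (np.median may fall between two projection values; then X_equal is empty)
        return P[proj < m], P[proj > m], P[proj == m]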
The next algorithm formalizes our procedure of the example in Section 3.2.
Algorithm 3.9 (subdomain numbering)
numbering(V, t, N_0)
%input: nodes V, vector t, starting number N_0
%output: numbering of nodes V
if |V| = 1 then
  label the element of V with the number N_0
else {
  1. (X_<, X_>, X_=) := BSP(V, t);
  2. V_bdy := X_= ∪ {V ∈ X_< | V has a neighbor in X_>} ∪ {V ∈ X_> | V has a neighbor in X_<},
     V_left := X_< \ V_bdy,  V_right := X_> \ V_bdy;
  3. if V_left ≠ ∅: numbering(V_left, t, N_0);
  4. if V_right ≠ ∅: numbering(V_right, t, N_0 + |V_left|);
  5. enumerate the elements of V_bdy starting with the number N_0 + |V_left| + |V_right|;
}
return
Algorithm 3.9 allows us to efficiently number the nodes of a mesh that is refined toward a line as in Fig. 1. Our final algorithm splits the domain into subdomains, each of which can be treated efficiently by Algorithm 3.9.
Algorithm 3.10 (node numbering)
1. split the domain Ω into subdomains Ω_1, …, Ω_p; choose vectors t_i, i = 1, …, p;
2. assign each node of V to exactly one of the node sets V_1, …, V_p according to the subdomain it lies in;
3. V_bdy := {V ∈ V | V has neighbors in more than one subdomain};
4. N := 1
5. for i = 1, …, p do {
     call numbering(V_i \ V_bdy, t_i, N);  N := N + |V_i \ V_bdy|;
   }
6. number the nodes of V_bdy starting at N
The subdomains Ω_i and the partitioning vectors t_i should be chosen such that
(a) |V_bdy| ≤ C log|V|, and
(b) the partitioning in the subsequent calls numbering(V_i, t_i, ·) is (C, 1)-balanced in the sense of Definition 3.5.
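Combining the pieces, a driver in the spirit of Algorithm 3.10 could look as follows (again an illustrative sketch with hypothetical helper signatures: numbering is the routine of Algorithm 3.9 returning a node-to-number dictionary, assign_subdomain maps each node to one of the p subdomains, and neighbors encodes the mesh connectivity).

    def node_numbering(nodes, neighbors, assign_subdomain, vectors, numbering):
        """Algorithm 3.10: subdomain-wise numbering; interface nodes are numbered last."""
        p = len(vectors)
        sub = {v: assign_subdomain(v) for v in nodes}
        v_bdy = {v for v in nodes
                 if any(sub[w] != sub[v] for w in neighbors[v])}   # interface nodes
        labels, N = {}, 1
        for i in range(p):
            V_i = [v for v in nodes if sub[v] == i and v not in v_bdy]
            if V_i:
                labels.update(numbering(V_i, vectors[i], N))        # Algorithm 3.9
                N += len(V_i)
        for v in v_bdy:                                             # step 6
            labels[v] = N
            N += 1
        return labels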
To obtain guidelines for the selection of the subdomains Ω_i and the partitioning vectors t_i, it is valuable to study examples where Condition (a) or Condition (b) is not satisfied. This is the purpose of the next example.
Example 3.11 The left and center pictures in Fig. 3 illustrate situations in which Conditions (a), (b) are violated. In the left picture of Fig. 3, the common boundary ∂Ω_i ∩ ∂Ω_j is tangential to ∂Ω at b, and thus we cannot expect |V_bdy| ≤ C log|V| (cf. the cone condition (3.9)). In the center picture of Fig. 3, the partitioning vector t_i is parallel to the outer normal vector n(x) at the boundary point x ∈ ∂Ω_i ∩ ∂Ω. This prevents the partitioning from being (C, 1)-balanced, since at some stage of the recursion Condition (3.15) will be violated (note again the cone condition (3.9)). We refer to Example 4.5 below, where this kind of failure is demonstrated numerically. Finally, we point out that in the center picture of Fig. 3 the vector t_j shown for the neighboring subdomain Ω_j does satisfy the cone condition.
From the two cases of failure in Example 3.11, we draw the following guidelines:
1. The subdomains should be chosen such that ∂Ω_i ∩ ∂Ω_j is non-tangential to ∂Ω.
2. For each subdomain Ω_i, the partitioning vector t_i should be chosen such that the cone condition (3.9) holds uniformly in b ∈ ∂Ω_i ∩ ∂Ω, i.e., δ and ε are independent of b.
A partition chosen according to these rules is depicted in the right part of Fig. 3.
Remark 3.12 The guideline to choose t_i such that the cone condition (3.9) is satisfied at each point b ∈ ∂Ω_i ∩ ∂Ω guarantees that, in the partitioning, |V_bdy^(i)| = O(log|V|) at each stage i of the recursion (see Lemma 3.4). Thus, the partitioning is (C, 1)-balanced if we can ensure (3.14), that is, that V_left and V_right are comparable in size. Note that this could be monitored during run time.
Remark 3.13 In all steps of the recursion in Algorithm 3.9, we use a fixed partitioning vector t. This is done for simplicity of exposition. In principle, it could be chosen differently in each step of the recursion depending on the actual set to be partitioned. Since the partitioning strategy should be (C, 1)-balanced, one could monitor this property during run time and adjust the vector t as necessary.
We conclude this section with a work estimate for the case p ≡ 1.
Figure 3: Choosing subdomains and partitioning vectors. Left and center: cases of failure. Right: possible partitioning with arrows indicating a good choice of vector t_i; in the three subdomains without an arrow, t_i may be chosen arbitrarily.
Proposition 3.14 Let V be the set of nodes corresponding to a geometric mesh on a domain Ω. Assume that the subdomains Ω_1, …, Ω_p and the vectors t_1, …, t_p in Algorithm 3.10 are chosen such that a) |V_bdy| ≤ C log|V| and b) the partitioning in each call numbering(V_i, t_i, ·) is (C, 1)-balanced. Then the frontwidth ω(A) of the stiffness matrix on geometric meshes with p ≡ 1 is bounded by
  ω(A) ≤ C log² |V|,
where the constant C is independent of |V|. The storage requirements nnz and work W for the Cholesky factorization are bounded by
  nnz ≤ C |V| log² |V|,   W ≤ C |V| log⁴ |V|.
Proof. The hypothesis that Algorithm 3.10 is based on a (C, 1)-balanced partitioning together with Proposition 3.6 implies ω(A) ≤ C log² |V|. The estimates concerning storage requirement and work then follow from Proposition 3.1.
Remark 3.15 On each level, the nodes of V_bdy^(i) are numbered arbitrarily. Suitable numbering strategies for these sets could further improve the frontwidth ω(A).
3.4 Node numbering: geometric meshes and linear degree vectors
We now consider the case of geometric meshes and linear degree vectors. We proceed as in Section 3.3 for the case p ≡ 1 by identifying degrees of freedom with points in space. We use the notion of nodes introduced in Definition 2.7 and denote by V the set of all nodes. We count nodes according to their multiplicity, that is, the number of shape functions corresponding to that node. This procedure is justified by the fact that shape functions associated with the same node have the same support and therefore the same neighbors. As in the case p ≡ 1, we say that a node V′ is the neighbor of a node V if the intersection of the supports of the associated shape functions has positive measure. The set of neighbors of a node is defined as in (3.1).
Remark 3.16 Nodes are counted according to their multiplicity. If a one-to-one correspondence between points in space and degrees of freedom is desired, one could choose distinct nodes on an edge (e.g., uniformly distributed) to be assigned to the shape functions associated with that edge; likewise, distinct nodes in an element could be selected to be assigned to the shape functions associated with that element. The performance of the algorithms below will be very similar.
estimate the resulting frontwidth, we need the analog of Lemma 3.4.
Lemma 3.17 Let T be a geometric mesh on a
domain
, p be a linear degree vector, and
assume the cone condition (3.9). Dene the half-spaces
and set
is a node and
has a neighbor in H> g [ has a neighbor in H<
Then
log 3 jV j;
where
depends only on -, , and the constants describing the geometric mesh T and the
linear degree vector p.
Proof. The proof of this lemma is very similar to that of Lemma 3.4. For the bound on jV bdy j
we have to estimate (using the notation of the proof of Lemma 3.4)
K2K bdy
The desired bound then follows as in the proof of Lemma 3.4 if we observe that
In view of the appearance of the exponent 3 in Lemma 3.17, we expect Algorithm 3.10 to be
3)-balanced. In this case, we can obtain the following result for the performance of the
numbering obtained by Algorithm 3.10:
Theorem 3.18 Let T be a geometric mesh and p be a linear degree vector. Set N := dim S^p(T). Assume that the subdomains Ω_1, …, Ω_p and partitioning vectors t_1, …, t_p in Algorithm 3.10 are chosen such that a) |V_bdy| ≤ C log³|V| and b) the partitioning in each call numbering(V_i, t_i, ·) is (C, 3)-balanced. Then the frontwidth ω(A) of the stiffness matrix is bounded by
  ω(A) ≤ C log⁴ N.
The storage requirements nnz and work W for the Cholesky factorization are bounded by
  nnz ≤ C N log⁴ N,   W ≤ C N log⁸ N.
Remark 3.19 In view of Remark 3.8, Algorithm 3.10 requires O(N log N) work (i.e., optimal complexity) to compute the numbering.
Remark 3.20 We assumed that the mesh consists of triangles only. However, Algorithm 3.10 can be applied to meshes containing quadrilaterals and curved elements. Theorem 3.18 holds verbatim in these cases as well.
Figure 4: Examples 4.1, 4.2: influence of partitioning vector in BSP on frontwidth.
4 Numerical examples
In this section, we confirm that the numbering obtained by Algorithm 3.10 allows for computing the Cholesky factorization with O(N log^q N), q ∈ {4, 8}, work. We restrict ourselves to the case p ≡ 1, that is, we illustrate Proposition 3.14. In all examples, the nodes on the boundary correspond to unknowns, i.e., we consider Neumann problems. In all examples, we use Algorithms 3.10, 3.9 to obtain the numbering of the nodes.
In our computational experiments, the meshes are generated with the code Triangle of J.R. Shewchuk, [11]. Triangle is a realization of Ruppert's algorithm, [10], which creates triangulations with a guaranteed minimum angle of 20°. Our reason for working with this particular triangulation algorithm is that it automatically produces meshes with the desired property diam K ≈ dist(K, ∂Ω) if its input consists of quasi-uniformly distributed boundary nodes only (the meshes in Figs. 5, 6, for example, are obtained with Triangle from 200 uniformly distributed points on the boundary).
In all tables and figures, N stands for the number of nodes of the mesh generated by Triangle, ω is the frontwidth of the stiffness matrix, nnz the storage requirement for the Cholesky factor, flops the number of multiplications, and t_chol the CPU-time required to perform the factorization. We implemented the Cholesky factorization in "inner product form," where L is computed columnwise and the sparsity pattern of L is exploited.
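For illustration, a minimal dense prototype of such a columnwise ("inner product" / left-looking) Cholesky factorization is sketched below; it is my own simplification and skips the sparsity bookkeeping (restricting the inner loops to the profile determined by the ω_j is exactly what makes the real implementation run in O(N ω²)).

    import numpy as np

    def cholesky_columnwise(A):
        """Left-looking Cholesky A = L L^T, computing L column by column."""
        A = np.array(A, dtype=float)
        N = A.shape[0]
        L = np.zeros_like(A)
        for j in range(N):
            # inner products with previously computed columns
            s = A[j:, j] - L[j:, :j] @ L[j, :j]
            L[j, j] = np.sqrt(s[0])
            L[j + 1:, j] = s[1:] / L[j, j]
        return L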
The basic building block of our procedure is Algorithm 3.9. Our first example, therefore, analyzes in detail the situation already discussed in Example 3.3.
Example 4.1 Let Ω = (0,1)² be the unit square. For n ∈ ℕ the initial input for Triangle are the points {(i/n, 0) | i = 0, …, n} ∪ {(1, 0.3), (1, 1), (0.7, 1), (0, 1), (0, 0.7)} (see Fig. 4 for Triangle's output for n = 50). The node numbering is then obtained by applying Algorithm 3.9 with the partitioning vector t = (0, 1). The results are collected in Table 1 and Fig. 4. In view of Proposition 3.6 we expect the frontwidth ω to be O(log² N). The results of Table 1 are plotted in Fig. 4, and the observed growth is indeed very close to O(log² N). In view of Proposition 3.1, nnz, flops, and t_chol then behave as predicted by these estimates.
Table 1: Examples 4.1, 4.2: n points on the edge (columns: N, ω, nnz, flops, t_chol [sec]).
Example 4.2 The choice in Example 4.1 of the partitioning vector t is well-suited to the case of refinement toward a straight edge. In view of Lemma 3.4, we expect Algorithm 3.9 to still be (C, 1)-balanced for partitioning vectors t that are not parallel to the normal vector. In order to illustrate that "non-optimal" choices of partitioning vectors t still lead to (C, 1)-balanced nested dissection, we consider the same meshes as in Example 4.1 but employ Algorithm 3.9 with the vector t = (1, 1). The results are presented in the right part of Table 1 and in Fig. 4. We observe that this choice of partitioning strategy leads to very similar results as in Example 4.1, showing robustness of our algorithm with respect to the choice of partitioning vector t.
Example 4.3 In this example, the domain is the unit square Ω = (0,1)². The initial input are n uniformly distributed nodes on the boundary ∂Ω (see Fig. 5 for Triangle's output for n = 200). The node numbering is achieved with Algorithm 3.10 for four subdomains Ω_1, …, Ω_4 with corresponding vectors t_i given by (4.1). The numerical results are collected in Table 2. We expect the frontwidth to be O(log² N), which is visible in Fig. 5.
Figure 5: Examples 4.3, 4.4: meshes with 200 points on boundary (left) and frontwidth vs. log N (right).
Example 4.4 In this example, we replace the square of Example 4.3 with a clover-leaf-shaped domain (see Fig. 5 for Triangle's output for n = 200 boundary points). The partitioning into four subdomains and the choice of the partitioning vectors t_i is given by (4.1). The numerical results are collected in Table 3. The expected relation ω = O(log² N) is visible in Fig. 5.
Example 4.5 The key feature of the choice of the partitioning vectors t_i in Examples 4.3, 4.4 is that, for each i,
  sup { ⟨t_i, n(x)⟩ / (|t_i| |n(x)|) : x ∈ ∂Ω_i ∩ ∂Ω } < 1   (n(x) is the outer normal at a boundary point x).   (4.2)
This condition was identified in Example 3.11 as necessary for the binary space partitioning strategy with (fixed) vector t_i to be (C, 1)-balanced. In this last example, we illustrate that (4.2) is indeed necessary. To that end, we replace the square of Example 4.3 with a family of dumbbell-shaped domains depending on a parameter c. The subdomains Ω_i and the partitioning vectors t_i are chosen as in Example 4.3 and given by (4.1). A calculation shows that condition (4.2) is satisfied for c ∈ (0, 2/3) and is violated for c = 2/3 (for the subdomains Ω_1, Ω_4 in Fig. 6). For c close to 2/3 we therefore expect our binary space partitioning to perform poorly in the sense that the sets V_bdy^(i) become large, thus resulting in large frontwidths. This is illustrated in Table 4, where the frontwidths for different values of c are shown in dependence on n, the number of points on the boundary. Fig. 6 shows that in the limit c = 2/3, the frontwidth ω does not grow polylogarithmically in N. In Table 4, we report the number N of nodes for the case c = 2/3; we note, however, that the meshes produced by Triangle for the three cases are very similar.
Acknowledgments: The authors would like to thank Profs. W. Hackbusch and L.N. Trefethen for valuable comments on the paper.
Table 2: Example 4.3 (columns: N, ω, nnz, flops, t_chol [sec]).
Table 3: Example 4.4 (columns: N, ω, nnz, flops, t_chol [sec]).
Figure 6: Example 4.5: mesh with 200 points on boundary (top left), the four subdomains (bottom left), and frontwidth vs. log N (right) for c = 0.3 and c = 2/3.
n        N        ω (for increasing values of c; last column: c = 2/3)
1024     2022     107    118    118
2048     4079     128    159    155
131072   264600   370    577    768
262144   529340   423    723    1043
Table 4: Example 4.5: frontwidth ω for different values of c in dependence on n, the number of boundary points; N is the number of nodes for c = 2/3.
--R
Time bounds for selection.
A general and flexible Fortran 90 hp-FE code.
On visible surface generation by a priori tree structures.
Computer implementation of the finite element method.
Computer solution of large sparse positive definite systems.
An algorithm for reducing the bandwidth and profile of a sparse matrix.
Spectral/hp Element Methods for CFD.
Boundary concentrated finite element methods.
The Art of Computer Programming.
A Delaunay refinement algorithm for quality 2-dimensional mesh generation.
Triangle: engineering a 2D quality mesh generator and Delaunay triangulator.
--TR
A Delaunay refinement algorithm for quality 2-dimensional mesh generation
The art of computer programming, volume 3
Computer Solution of Large Sparse Positive Definite Systems
Finite Element Method for Elliptic Problems
Computer implementation of the finite element method | hp-finite element methods;direct solvers;schur complement;meshes refined towards boundary |
631169 | Computation of the simplest normal forms with perturbation parameters based on Lie transform and rescaling. | Normal form theory is one of the most powerful tools for the study of nonlinear differential equations, in particular for stability and bifurcation analysis. Recently, many researchers have paid attention to further reduction of conventional normal forms (CNF) to the so-called simplest normal form (SNF). However, the computation of normal forms has been restricted to systems which do not contain perturbation parameters (unfolding). The computation of the SNF is more involved than that of CNFs, and the computation of the SNF with unfolding is even more complicated than that of the SNF without unfolding. Although some authors have mentioned further reduction of the SNF, no results have been reported on the exact computation of the SNF of systems with perturbation parameters. This paper presents an efficient method for computing the SNF of differential equations with perturbation parameters. Unlike CNF theory, which uses an independent nonlinear transformation at each order, this approach uses a consistent nonlinear transformation through computations of all orders. The particular advantage of the method is that it provides an efficient recursive formula which can be used to obtain the nth-order equations containing the nth-order terms only. This greatly saves computational time and computer memory. The recursive formulations have been implemented on computer systems using Maple. As an illustrative example, the SNF for the single zero singularity is considered using the new approach. | Introduction
Normal form theory for differential equations can be traced back to the original work of one hundred years ago, and most credit should be given to Poincaré [1]. The theory plays an important role in the study of differential equations
related to complex behavior patterns such as bifurcation and instability [2-4]. The basic idea of normal form theory is to employ successive, near-identity nonlinear transformations to obtain a simple form. The simple form is qualitatively equivalent to the original system in the vicinity of a fixed point, and thus greatly simplifies the dynamical analysis. However, it has been found that conventional normal forms (CNF) are not the simplest form which can be obtained, and may be further simplified using a similar near-identity nonlinear transformation (e.g., see [5-13]). Roughly speaking, CNF theory uses the kth-order nonlinear transformation to possibly remove the kth-order nonlinear terms of the system, while in the computation of the simplest normal form (SNF) the terms in the kth-order nonlinear transformation are not only used to simplify the kth-order terms of the system, but are also used to eliminate higher order nonlinear terms. Since the computation of the SNF is much more complicated than that of CNFs, computer algebra systems such as Maple, Mathematica, Reduce, etc. have been used (e.g., see [12,14-18]). Recently, researchers have paid particular attention to the development of efficient computation methodology for computing the SNF [18].
The computation of normal forms has been restricted to systems which do not contain perturbation parameters (unfolding). However, in general a physical system or an engineering problem always involves some system parameters, usually called perturbation parameters or unfolding. In practice, finding such a normal form is more important and applicable. There are two ways for finding such a normal form. One way is to extend the dimension of a system by including the dimension of the parameter and then apply normal form theory to the extended system. The other way employs normal form theory directly to the original system. The former may be convenient for proving theorems while the latter is more suitable for the computation of normal forms, which is particularly useful when calculating an explicit normal form for a given system. However, in most cases of computing such CNFs with unfolding, people are usually interested in the normal form only. Thus one may first ignore the perturbation parameter and compute the normal form for the corresponding "reduced" system (obtained by setting the parameters to zero), and then add the unfolding to the resulting normal form. In other words, the normal form of the original system with parameters is equal to the normal form of the "reduced" system plus the unfolding. This greatly reduces the computation effort, at the cost that it does not provide the nonlinear transformation between the original system and the normal form. This "simplified" approach is based on the fact that the normal form terms (besides the unfolding) for the original system (with perturbation parameters) are exactly the same as those of the "reduced" system, implying that all higher order nonlinear terms containing the parameters can be eliminated by nonlinear transformations.
The computation of the SNF is more involved than that of CNFs, and the computation of the SNF with unfolding is even more complicated than that of the SNF without unfolding. Although some authors have mentioned further reduction of the SNF, no results have been reported on the exact computation of the SNF of systems with perturbation parameters. One might suggest that we may follow the "simplified" way used for computing CNFs. That is, we first find the SNF for the "reduced" system via a near-identity nonlinear transformation, and then add an unfolding to the SNF. However, it can be shown that this "simplified" way is no longer applicable for computing the SNF of systems with perturbation parameters. In other words, in general it is not possible to use only near-identity transformations to remove all higher order terms which involve the perturbation parameters. In this paper, we propose, in addition to the near-identity transformation, to incorporate a rescaling of time to form a systematic procedure. The particular advantage of the method is to provide an efficient recursive formula which can be used to obtain the nth-order equations containing the nth-order terms only. This greatly saves computational time and computer memory. The recursive formulation can be easily implemented using a computer algebra system such as Maple. Moreover, unlike CNF theory, which uses an independent nonlinear transformation at each order, this approach uses a consistent nonlinear transformation through all order computations. This provides a one-step transformation between the original system and the final SNF, without the need of combining the multiple step nonlinear transformations at the end of the computation.
In the next section, the efficient computation method is presented and the general explicit recursive formula is derived. Section 3 applies the new approach to derive the SNF for the single zero singularity, and conclusions are drawn in Section 4.
2 Computation of the SNF using Lie transform
Consider the general nonlinear dierential equation, described by
dx
dt
where x and are the n-dimensional state variable and m-dimensional
parameter variable, respectively. It is assumed that is an equilibrium
of the system for any values of , i.e., f(0; ) 0. Further, we assume that
the nonlinear function f(x; ) is analytic with respect to x and , and thus
we may expand equation (1) as
dx
dt
where Lx 4
represents the linear part and L is the Jacobian matrix,
Dxf , evaluated on the equilibrium at the critical point It is
assumed that all eigenvalues of L have zero real parts and, without loss of
generality, is given in Jordan canonical form. (Usually J is used to indicate
Jacobian matrix. Here we use L in order to be consistent with the Lie bracket
denotes the kth-degree vector homogeneous polynomials
of x and .
To show the basic idea of normal form theory, we rst discuss the case when
system (1) does not involve the perturbation parameter, , as normal forms
are formulated in most cases. In such a case f k (x; ) is reduced to f k (x).
The basic idea of normal form theory is to nd a near-identity nonlinear
transformation
such that the resulting system
dy
dt
becomes as simple as possible. Here h k (y) and g k (y) denote the kth-degree
vector homogeneous polynomials of y.
According to Takens' normal form theory [19], we may rst dene an operator
as follows:
denotes a linear vector space consisting of the kth-degree homogeneous
vector polynomials. The operator called Lie bracket, dened
by
Next, dene the space R k as the range of L k , and the complementary space
of R k as K
and we can then choose bases for R k and K k . Consequently, a vector homogeneous
polynomial f k (x) 2 H k can be split into two parts: one of them can
be spanned by the basis of R k and the other by that of K k . Normal form
theory shows that the part belonging to R k can be eliminated while the part
belonging to K k must be retained in the normal form.
It is easy to apply normal form theory to nd the \form" of the CNF given by
equation (4). In fact, the coecients of the nonlinear transformation h k (y)
being determined correspond to the terms belonging to space R k . The \form"
of the normal form g k (y) depends upon the basis of the complementary space
which is induced by the linear vector v 1 . We may apply the matrix method
[2] to nd the basis for space R k and then determine the basis of the complementary
space K k .
Since the main attention of this paper is focused on nding further reduction
of CNFs and computing the explicit expressions of the SNF and the nonlinear
transformation, we must nd the \form" of g k (y). Similar to nding CNFs,
the SNFs have been obtained using near-identity nonlinear transformations.
It should be mentioned that some author has also discussed the use of \rescal-
ing" to obtain a further reduction (e.g., see [14,15]). However, no results have
been reported on the study of the SNF of system (1). In this paper, we present
a method to explicitly compute the SNF of system (1). The key idea is still
same as that of CNFs: nding appropriate nonlinear transformations so that
the resulting normal form is the simplest. The simplest here means that the
terms retained in the SNF is the minimum up to any order.
The fundamental dierence between the CNF and the SNF is explained as fol-
lows: Finding the coecients of the nonlinear transformation and normal form
requires for solving a set of linear algebraic equations at each order. Since in
general the number of the coecients is larger than that of the algebraic equa-
tions, some coecients of the nonlinear transformation cannot be determined.
In CNF theory, the undetermined coecients are set zero and therefore the
nonlinear transformation is simplied. However, in order to further simplify
the normal form, one should not set the undetermined coecients zero but let
them carry over to higher order equations and hope that they can be used to
simplify higher order normal form terms. This is the key idea of the SNF com-
putation. It has been shown that computing the SNF of a \reduced" system
(without perturbation parameters) is much complicated than that for CNFs.
Therefore, it is expected that computing the SNF of system (1) is even more
involved than nding the SNF of the \reduced" system. Without a computer
algebra system, it is impossible to compute the SNF. Even with the aid of
symbolic computation, one may not be able to go too far if the computational
method is not ecient.
To start with, we extend the near-identity transformation (3) to include the
parameters, given by
and then add the rescaling on time:
Further we need to determine the \form" of the normal form of system (1).
In generic case, we may use the basis of K k (see equation (7)) to construct
k (y), which is assumed to be the same as that for the CNF of system (1),
plus the unfolding given in the general form, L 1 () y, so that equation (4)
becomes
dy
d
is an nn matrix linear function of , to be determined in the
process of computation, representing the unfolding of the system.
Now dierentiating equation (8) with respect to t and then applying equations
(2), and (8){(10) yields a set of algebraic equations at each order for
solving the coecients of the SNF and the nonlinear transformation. A further
reduction from a CNF to the SNF is to nd appropriate h k (y; )'s such
that some coecients of g k (y)'s can be eliminated.
When one applies normal form theory (e.g., Takens normal form theory) to
a system, one can easily nd the \form" of the normal form (i.e., the basis
of the complementary space K k ), but not the explicit expressions. However,
in practical applications, the solutions for the normal form and the nonlinear
transformation are both important and need to be found explicitly. To do this,
one may assume a general form of the nonlinear transformation and substitute
it back to the original dierential equation, with the aid of normal form theory,
to obtain the kth-order equations by balancing the coecients of the homogeneous
polynomial terms. These algebraic equations are then used to determine
the coecients of the normal form and the nonlinear transformation. Thus,
the key step in the computation of the kth-order normal form is to nd the
kth-order algebraic equations, which takes the most of the computation time
and computer memory. The solution procedures given in most of normal form
computation methods contain all lower order and many higher order terms in
the kth-order equations, which extremely increases the memory requirement
and the computation time. Therefore, from the computation point of view,
the crucial step is rst to derive the kth-order algebraic equations which only
contain the kth-order nonlinear terms.
The following theorem summarizes the results for the new recursive and computationally
ecient approach, which can be used to compute the kth-order
normal form and the associated nonlinear transformation.
Theorem 1: The recursive formula for computing the coecients of the simplest
normal form and the nonlinear transformation is give by
Tm
h l i
are all jth-degree vector homogeneous
polynomials in their arguments, and T j (y; ) is a scalar function of y and
. The variables y and have been dropped for simplicity. The notation
h l i
denotes the ith order terms of the Taylor expansion of
y. More precisely,
where each dierential operator D aects only function f j , not h l m (i.e., h l m
is treated as a constant vector in the process of the dierentiation), and thus
j. Note that at each level of the dierentiation, the D operator is actually
a Frechet derivative to yield a matrix, which is multiplied with a vector to
generate another vector, and then to another level of Frechet derivative, and
so on.
Proof: First dierentiating equation (8) results in
dx
dt
dy
dt
dy
dt
dy
d
d
dt
then substituting equations (2), (9) and (10) into equation (13) yields
Note that the T 0 can be used for normalizing the leading non-zero nonlinear
coecient of the SNF. Since this normalization may change stability analysis if
time is reversed, we prefer to leave the leading non-zero coecient unchanged
and thus set T
Next substituting equation (8) into equation (14) and re-arranging gives
Then we use Taylor expansions of f near to rewrite equation (15) as
Further, applying equations (8) and (9) yields
representing the linear part of the system, and the Lie operator
with respect to y has been used. Note in equation (17) that the expansion
of f and h do not have the purely parameter terms since f(0;
for any values of . Finally comparing the same order terms in equation (17)
results in
. (18)
y, and the variables in g i
have been dropped for simplicity. For a general k, one can obtain the recursive
given in equation (11), and thus the proof is completed.
It has been observed from equations (11) and (18) that
(i) If system (1) does not have the parameter , then
thus equation (17) is reduced
which has been obtained in [18] for the \reduced" system. This indicates
that computing the SNF with unfolding needs much more computation
eort than that for computing the SNF without unfolding.
(ii) The only operation involved in the formula is the Frechet derivative, involved
in Dh i , D i f j and the Lie bracket [; ]. This operation can be
easily implemented using a computer algebra system.
(iii) The kth-order equation contains all the kth-order and only the kth-order
terms. The equation is given in a recursive form.
(iv) The kth-order equation depends upon the known vector homogeneous
polynomials
, and on the results for
obtained from
the lower order equations.
(v) The equation involves coecients of the nonlinear transformation h,
rescaling T and the coecients of the kth-order normal form g k . If
the jth-order (j < coecients of h j and T j are completely determined
from the jth-order equation, then the kth-order equation only
involves the unknown coecients of h k ; T k and g k , which yields a CNF.
(vi) If the kth-order equation contains lower order coecients of h
(j < which have not been determined in the lower order (<
tions, they may be used to eliminate more coecients of g k , and thus
the CNF can be further simplied.
(vii) In most of the approaches for computing the SNFs without unfolding
(e.g., see [7,8,11{17]), the nonlinear vector eld f is assumed to be in a
CNF for the purpose of simplifying symbolic computations. Since for such
an approach, the kth-order equations usually include all the lower order
terms as well as many higher order terms, it is extremely time consuming
for symbolic computation and takes too much computer memory. For the
approach proposed in this paper, the kth-order equation exactly only
contains the kth-order terms, which greatly saves computer memory and
computational time. This is particularly useful for computing the SNF
with unfolding. Therefore, for our approach, the vector eld f(x; ) can
be assumed as a general analytic function, not necessary a CNF form.
Now we can use equation (18) to explain the idea of SNF. Consider the rst
equation of (18) for L 2 and g 2
, we can split the right-hand side into two
parts: one contains y only while the other involves both y and . The part
containing only y can determine g 2 . That is, the part from f 2 that cannot
be eliminated by h 2 and T 1 is the solution for
. Similarly, the other part
containing both y and can be used to nd the unfolding L 1 (). However,
it can be seen that some coecients of h 2 and T 1 are not used at this order.
Setting these \unnecessary" coecients zero results in the next equation of
in the same situation: it only requires to use h 3 and T 2 to remove
terms from f 3 as much as possible, since other terms h
been solved from the second order equations. This procedure can be continued
to any higher order equations. This is exactly CNF theory. However, when we
solve for g 2 and L 2 let the \unnecessary" coecients of h 2 and T 1 be carried
over to next step equation, then it is clear to see from the second equation
of (18) that four terms may contain these \unnecessary" coecients. These
\unnecessary" coecients can be used to possibly further eliminate a portion
or the whole part of f 3 which cannot be eliminated by the CNF approach. If
these \unnecessary" coecients are not used at this step, they can be carried
over further to higher order equations and may be used to remove some higher
order normal form terms.
3 The SNF for the single zero singularity
In this section, we shall apply the results and formulas obtained in the previous
section as well as the Maple program developed on the basis of recursive
formula (11) to compute the SNF of single zero singularity. We use this simple
example to demonstrate the solution procedure in nding the explicit SNF of
a general system. For the single zero singularity, the linear part, L x, becomes
zero, and we may put the general expanded system in a slightly dierent form
for convenience:
dx
dt
a 1i i x +X
a 2i i x 2 +X
a
Similarly, the near-identity nonlinear transformation and the time scaling are,
respectively, given by
and
with In order to have a comparison between system (20)
and its \reduced" system, given by (obtained by setting
(20)),
dx
dt
we list the SNF of the \reduced" system below, which were obtained only
using a near-identity transformation [17]:
(i) If a 20 6= 0, then the SNF is
dy
dt
(ii) If a then the SNF is
dy
dt
where the coecient b (2k 1)0 is given explicitly in terms of a j0 's.
This shows that the SNF of the \reduced" system (23) contains only two terms
up to any order.
To obtain the unfolding, we assume a 11 6= 0. Other cases can be similarly
discussed. Since equation (20) describes a codimension one system, we expect
that the nal SNF should have only one linear term for the unfolding, and
all higher order terms in equation (20) are eliminated except for, at most, the
terms with the coecients a 20 and a 30 . In other words, the SNF of equation
(20) is expected to have the form:
dy
d
We start from the case a 20 6= 0, and then discuss the general case.
Generic case: Suppose a 20 6= 0, and in addition a 11 6= 0. This is a generic
case. Applying the rst equation of (18) gives
which in turn results in
as expected, i.e., no second order terms can be removed. Similarly, from the
second equation of (18) one can nd the following equations:
a
a 20 a 21 a
First it is observed from equation (29) that if we only use the near-identity
transformation (21), then only the coecient h 11 can be used in equation (29),
and thus a must be retained in the normal form as expected. However, an
additional term a 12 must be retained too in the normal form. More terms (in
addition to a 11 ; a 20 and a will be found in higher order normal forms. This
can also happen in other singularities. This shows that one cannot apply the
\simplied" way to nd SNF of a system with perturbation parameters. In
other words, unlike CNFs, the SNF of system (20) is not equal to the SNF of
the \reduced" system (23), given by equation (24) or (25), plus an unfolding
(which is a 11 for this case).
Secondly, note from equation (29) that with the aid of rescaling, we can remove
g 3 which appears in the SNF of the \reduced" system. There are three
coecients three equations in (29), and thus the three
equations can be solved when It is further seen from equation (29)
must be chosen for the rst equation, indicating that the rescaling
must include the state variable in order to obtain the SNF of the system. h 11
and T 01 are used for the second and the third equations of (29), respectively.
Before continuing to the next order equation, we summarize the above results
as the following theorem.
Theorem 2: The general SNF of system (20) for the single zero singularity
cannot be obtained using only a near-identity transformation. When the rescaling
on time is applied, the SNF of the system can be even further simplied.
However, the rescaling must contain the state variable.
By repeatly applying the recursive formula (11), one can continue the above
procedure to nd the algebraic equations for the third, fourth, etc. order equations
and determine the nonlinear coecients. The recursive algorithm has
been coded using Maple and executed on a PC. The results are summarized
in
Table
1, where NT stands for Nonlinear Transformation. The table shows
the coecients which have been computed. The coecients in the rst row,
are actually the two coecients of the SNF.
Table
1. NT coecients for a 20 6= 0.
l 70 h 61 h 52 h 43 h 34 T 15 h
It is observed from Table 1 that except for the two coecients g 2 and L 1 ,
all other coecients are lined diagonally in the ascending order, according to
one of the subscripts of h ij or T ij . The Maple program has been executed
up to 10th order. But the general rule can be easily proved by the method
of mathematical induction. Note that the coecients h 2j are not presented
in
Table
1, instead the coecients T 1j are used. In fact, the coecients h 2j
do not appear in the algebraic equations and that's why the coecients T 1j
have to be introduced, which causes the state variable involved in the time
rescaling. Further it is seen that the coecients h 0j follows L 1 , the coecients
h 1j follows g 2 , and the coecients T 1j are below g 2 . After this, h
are followed. This rule will be seen again in other non-generic cases discussed
later. Each row of the table corresponds to a certain order algebraic equations
obtained from the recursive equation (11). For example, the two coecients in
the top row are used for solving the second algebraic equations, corresponding
to the coecients of y 2 and y , and thus the SNF of system (20) for a 11 6= 0
and a 20 6= 0 is given by
dy
d
up to any order. The three coecients, in the second row
are used for solving the third order algebraic equations (see equation (11))
corresponding to the coecients y 3 ; y 2 and y 2 . In other words, all the
three third order nonlinear terms in system (20) can be removed by the three
coecients, and so on. All the coecients listed in Table 1 are explicitly expressed
in terms of the original system coecients a ij 's. Therefore, the two
nonlinear transformations given by equations (21) and (22) are now explicitly
obtained.
Simple non-general case: Now suppose a
we assume a 11 6= 0. This is a non-generic case. Then the coecient of the
in the second algebraic equation is identically equal to zero due to
a the solution procedure is similar to Case 1, we omit the detailed
discussion and list the results in Table 2, where
Table
2. NT coecients for a
Thus the SNF of system (20) for this case is given by
dy
d
Note from Table 2 that the top left entry is empty due to a 22 = 0, while g 3
moves downwards by one row. The coecients h 1j still follow g coecient,
and the T ij coecients are still below the g coecients. T 0j coecients do
not change. However, comparing Table 2 with Table 1 shows that Table 2 has
one more line of T 2j coecients, in addition to T 1j . Moreover, there is a new
line given by the coecients h 2j following the empty box where
One may continue to apply the above procedure and execute the Maple program
to compute the SNF of the single zero singularity for the case a
a Tables similar to Tables 1 and 2 can be found. In gen-
eral, we may consider the following non-generic case.
General non-generic case: a As
usual, we assume a 11 6= 0. The results are listed in Table 3, where
and a k0 . This indicates that the SNF of system (20) for the general
non-generic case is
dy
d
The general table has the similar rule as that of Table 1 and 2: T 0j 's follow
follow the empty box where 's follow the non-zero g k , and
lines of coecients
Table
3. NT coecients for a k0 6=
Summarizing the above results yields the following theorem.
Theorem 3: For the system
dx
dt
a 1i i x +X
a 2i i x 2 +X
a
which has a zero singularity at the equilibrium
the rst non-zero coecients of a j0 's is a k0 , then the SNF of the system is
given by
dy
d
up to any order.
In the above we only discussed the case a 11 6= 0 which results in the unfolding
a 11 y. Other possible unfolding may not be so simple as this case. However,
they can be easily obtained by executing the Maple program. For example,
suppose a but a 13 6= 0 and a 21 6= 0, then the SNF is found to be
dy
d
up to any order.
Conclusions
An ecient method is presented for computing the simplest normal forms of
dierential equations involving perturbation parameters. The main advantages
of this approach are: (i) it provides an algorithm to compute the kth-order
algebraic equations which only contain the kth-order terms. This greatly save
computational time and computer memory. (ii) The nonlinear transformation
is given in a consistent form for the whole procedure. The zero singularity is
particularly considered using the new approach. It is shown that the SNF for
the single zero singularity with unfolding has a generic form which contains
only two terms up to any order.
Acknowledgment
This work was supported by the Natural Science and Engineering Research
Council of Canada (NSERC).
--R
Normal forms for singularities of vector
Unique normal forms for planar vector
Normal forms for nonlinear vector
Normal forms for nonlinear vector
Unique normal form of the Hamiltonian 1:2-resonance
Further reduction of the Takens-Bogdanov normal forms
Linear grading function and further reduction of normal forms
Simplest normal forms of Hopf and generalized Hopf bifurcations
Unique normal form of Bogdanov- Takens Singularities
Hypernormal forms for equilibria of vector
Hypernormal forms calculation for triple zero degeneracies
The simplest normal form for the singularity of a pure imaginary pair and a zero eigenvalue
Computation of simplest normal forms of di
Singularities of vector
--TR
--CTR
Pei Yu , Yuan Yuan, A matching pursuit technique for computing the simplest normal forms of vector fields, Journal of Symbolic Computation, v.35 n.5, p.591-615, May | nonlinear transformation;the simplest normal form SNF;rescaling;computer algebra;differential equation;lie transform;unfolding |
636361 | Approximate Nonlinear Filtering for a Two-Dimensional Diffusion with One-Dimensional Observations in a Low Noise Channel. | The asymptotic behavior of a nonlinear continuous time filtering problem is studied when the variance of the observation noise tends to 0. We suppose that the signal is a two-dimensional process from which only one of the components is noisy and that a one-dimensional function of this signal, depending only on the unnoisy component, is observed in a low noise channel. An approximate filter is considered in order to solve this problem. Under some detectability assumptions, we prove that the filtering error converges to 0, and an upper bound for the convergence rate is given. The efficiency of the approximate filter is compared with the efficiency of the optimal filter, and the order of magnitude of the error between the two filters, as the observation noise vanishes, is obtained. | Introduction
Due to its vaste application in engineering, the problem of filtering a random signal X t
from
noisy observations of a function h(X t ) of this signal has been considered by several authors.
In particular, the case of small observation noise has been widely studied, and several articles
are devoted to the research of approximate filters which are asymptotically efficient when the
observation noise vanishes. Among them, one notices a first group in which a one-dimensional
system is observed through an injective observation function h (see [4, 5, 1]); in this case, the
filtering error is small when the observation noise is small, and one can find efficient suboptimal
finite-dimensional filters. The multidimensional case appears later on with [6, 7], but an assumption
of injectivity of h is again required; in particular, the extended Kalman filter is studied in
When h is not injective, the process fX t g cannot always be restored from the observation of
so the filtering error is not always small; such a case is studied in [3]. However, there
are some classes of problems in which fX t
g can be restored from fh(X t
)g; in these cases, the
filtering error is small and one again looks for efficient suboptimal filters. For instance, fX t g
is sometimes obtained from fh(X t
)g and its quadratic variation, see [2, 8, 9, 11]. Here, we are
interested in another case in which h(X t ) is differentiable with respect to the time t, and fX t g
is obtained from fh(X t
)g and its derivative. More precisely, we consider the framework of [10]
which we now describe.
We consider the two-dimensional process X
given by the It"o equation
dx (1)
dx (2)
(1)
with initial condition X 0
and we are concerned by the problem of estimating the
signal
when the observation process is modelled by the equation
dy t
and f
are standard independent real-valued Wiener processes, and " is a small
nonnegative parameter. In particular, if f 1
, then x (1)
is the position of some moving
body on R, x (2)
is its speed, the body is submitted to a dynamical force described by f 2
and to
a random force described by oe, and one has a noisy observation of the position.
if the functions h and x 2
are injective, then the signal X t
can
(at least theoretically) be exactly restored from the observation; we are here interested by the
asymptotic case " ! 0, and we look for a good approximation of the optimal filter
This approximation should be finite dimensional (solution of a finite dimensional equation driven
by y t ).
The same problem has been dealt with in [10] (with oe constant), by means of a formal
asymptotic expansion of the optimal filter in a stationary situation. Our aim is to work out a
rigorous mathematical study of the filter proposed by [10], namely the solution M
of
with F 12
and with initial condition M 0
This filter does in fact correspond to
the extended Kalman filter with stationary gain if one neglects the contribution of the derivatives
of f other than @f 1
. The stability of this filter is not evident and requires some assumptions.
When it is stable, we prove in this work that
x (1)
and
x (2)
O(
We also verify that (6) can be improved when oe is constant, h is linear, and f 2
is linear with
respect to x 1
(this case will be referred to as the almost linear case). The proofs follow the
method of [7].
The contents are organized as follows. In Section 2, we introduce the assumptions which will
be needed in the sequel, and we study the filtering error as " converges to zero; more precisely,
we obtain the rate (5). In Section 3 the error between the approximate filter and the optimal
filter is studied, and we prove (6). Section 4 is devoted to the almost linear case.
Notations
The following notations are used:
"oe
F 22
and
are the Jacobian matrices of f and \Sigma;
is either a 2 \Theta 2 matrix (if \Phi is R 2 -valued) or a line-vector (if \Phi is
real-valued), see Section 3;
The symbol is used for the transposition of matrices.
When describing the behaviour of approximate filters, we will write asymptotic expressions
with the meaning given by the following definition.
Definition 1.1 Consider a real or vector-valued stochastic process f t
g. If fi is real and p 1,
we will write that
when for some q 0, ff ? 0 and some positive constants C 1 , C 2 , c 3 ,
e \Gammac 3 t=" ff
for t 0 and " small. In this situation the process f t g is usually said to converge to zero with
rate of order " fi , in a time scale of order " ff .
2 Estimation of X
The following assumptions will be used throughout this article. The last one depends on a
parameter ffi 1.
is a random variable, the moments of which are finite;
are standard independent Wiener processes independent from X 0
(H3) the function h is C 3 with bounded derivatives, and h 0 is positive;
(H4) the function f is C 3 with bounded partial derivatives and F 12
is positive;
(H5) the function oe is C 2 with bounded partial derivatives;
(H6.ffi) one hasffi
oe
F
for any x =
), and for some positive
oe,
H and
F .
We consider the system (1)-(2) and the filter (3). We let F t
be the filtration generated by
t be the filtration generated by (y t ). Assumption (H6.ffi) says that the system
does not contain too much nonlinearity; when it is not satisfied, there may be a small positive
probability for the filter to loose the signal (see [8] for a similar problem). This is a rather
restrictive condition, so we discuss at the end of the section the general case where it does not
hold.
Theorem 2.1 Assume (H1) to (H5). There exists some universal ffi ? 1 such that if (H6.ffi)
holds, then one has
x (1)
in L p for any p 1.
Before entering the proof of this result, we re-scale the system in order to reduce the notation
in (H6.ffi). If we replace the processes x (1)
t and y t by x (1)
t =oe and y t =(oe
F
the functions f 1
, oe and h are respectively replaced by
(oe
(oe
oe(oe
F
and " is replaced by " / (oe
F
H). We can apply the filter (3) to this new system, and we obtain
F
t =oe. This shows that the problem can be reduced to the case
Now consider a change of basis defined by a matrix T and its inverse T \Gamma1 , where
"=2
"=2
Then consider the process
We are going to check that Z t is the solution of a linear stochastic differential equation; the
study of the exponential stability of this equation will enable the estimation of both components
of Z t
, and the theorem will immediately follow.
An equation for Z t
From equations (1) to (3), we have
))dt
d
In this equation, we introduce the Taylor expansions for the functions f and h
and
h(x (1)
are R 2 and R valued processes depending on fX t g and fM t g, and
We obtain a linear equation for X t
. By applying the transformation (7), we deduce for Z t
an equation of the type
d
The precise computation shows that
A t
with
A (11)
A (12)
A (11)
A (21)
A (22)
and where e
A t is a 2 \Theta 2 matrix valued process which is uniformly bounded as " converges to 0;
similarly, the matrix-valued process U t is also uniformly bounded.
Stability of A t
A t
is the constant matrix
and
A t
A
In the general case ffi ? 1, the coefficients of
A t
A
can be controlled so that this matrix is
uniformly close to \Gamma2I if ffi is close to 1; in particular, if
2, one can choose ffi so
that
A
I
and therefore
I
if " is small.
End of the proof of Theorem 2.1
Our goal is now to deduce an estimate of Z t in L 2p for p integer. From It"o's Formula and (8),
the process kZ t
is solution of
d
We deduce that the moment of order p of kZ t k 2 is finite, and that
d
dt
From (9), one has
Z
As a consequence of the Cauchy-Schwarz inequality, one has
kU
Thus we obtain the inequality
d
dt
\Gammap
ff
Moreover, there exists C 0
such that
so
d
dt
By solving this differential inequality, one obtains that for some C 00
Thus Z t is O(" 1=4 ), and the order of magnitude of the components of X follows from (7)
and the form of T \Gamma1 . \Pi
We remark in (10) that the time scale of the estimation is of order
"; one can compare it
with the time scale " obtained when the observation function is injective (see for instance [5]).
This means that it takes here more time to estimate the signal, and this is not surprising since
the second component of the signal is not well observed. There are also other systems where the
time scale is not the same for the different components of the signal (see [8]).
In Theorem 2.1, we need the assumption (H6.ffi) which is a restriction to the non linearity of
the system; otherwise it is difficult to ensure that the filter does not loose the signal (this problem
also occurs in [8]). Actually, we have chosen the filter (3) because it gives a good approximation
of "
next section), but it is not the most stable one. If in (4) we replace the processes
by constant numbers oe,
F and
H, we obtain a filter with constant
gain; we can again work out the previous estimations and prove that the result of Theorem 2.1
holds for this filter without (H6.ffi), as soon as
F
Thus we have two filters: a filter which is stable and tracks the signal under rather weak assump-
tions, and the filter (3) which seems more fragile but gives (under good stability assumptions) a
better approximation of the optimal filter.
3 Estimation of "
The main result contained in this section is Theorem 3.1, which states the rate of convergence
of the approximate filter considered in this paper towards the optimal filter. In order to give
a proof of this theorem a sequence of steps are needed: a change of probability measure, the
differentiation with respect to the initial condition and an integration by parts formula. A similar
method of proof is adopted in [7]. As in Theorem 2.1, we may have a problem of stability if the
general non linear case.
Theorem 3.1 Assume (H1) to (H6.ffi) and
(H7) The law of X 0
has a C 1 positive density p 0
with respect to the Lebesgue measure
and rp 0
) is in L 2 .
If ffi in (H6.ffi) is close enough to 1, then the filter M t given by equation (3) satisfies
x (1)
O(
in L 2 .
The rest of this section is devoted to the proof of this theorem. We suppose as in Theorem
2.1 that
1. Consider the matrix
which depends only on M t
. Notice that P t
is the solution of the stationary Riccati equation
e
F (M t
)\Sigma (M t
with
e
and that the process R t of (4) is
We will also need the inverse of P t
namely
Change of probability measure
Our random variables can be viewed as functions of the initial condition X 0
and of the Wiener
processes w and
w. We are going to make a change of variables; in view of the Girsanov theorem,
this can be viewed as a change of probability measure; however, all the estimations will be made
under the original probability P . Thus consider the new probability measure which is given on
F t by
d
dP
where
ae
\Gamma"
Z th(x (1)
s
Z th 2 (x (1)
s
ds
oe
The probability
P is the so-called reference probability, and one checks easily from the Girsanov
Theorem that y t =" and w t are standard independent Wiener processes under
. Let us define
now the probability measure e
P on F t
by
d
where
s
s
ds
oe
Then, the processes
e
Z t\Sigma (M s
s
ds
and y t =" are standard independent Wiener processes under e
P . On the other hand, one has
and
log(L t
Z th(x (1)
s
Z th 2 (x (1)
s
ds \Gamma
Z t\Sigma (M s
s
s
Differentiation with respect to the initial condition and an estimation
The random variables involved in our computation can now be viewed as functions of
and fy t g; let us denote by r 0
the differentiation with respect to the initial condition X 0
puted in L p ). In particular, we can see on (13) and (14) that the processes X t
and log(L t
are differentiable, and we obtain respectively matrix and vector-valued processes. Our aim is to
estimate the process
log(L t
U
with
Then an integration by parts will enable to conclude.
By applying the operator r 0
to (14), one gets
log(L t
Z th 0 (x (1)
s
x (1)
s
(dy s
s
s
s
="
Z th 0 (x (1)
s
x (1)
s
d
s
We can also differentiate (13), and if \Sigma 0 is the Jacobian matrix of \Sigma, we obtain
The matrix r 0
X t is invertible and It"o's calculus shows that
From this equation and (16) one can write that:
d
log(L t t )(r 0
="
log(L t t )(r 0
log(L t t )(r 0
since one has h 0 (x (1)
x (1)
On the other hand, from the equations of X t and M t (equations (1) and (3), respectively),
one has
By writing the differential of P
in the form
we obtain
d
dt
d
d
J (2)
One can write the Taylor expansions for f and h
h(x (1)
with
By using these expansions together with the consequence of (12)
in equation (19), we obtain
d
dt
d
d
J (2)
dt
By adding this equation to (18), we obtain that the process V t
of (15) satisfies
)d
d
d
J (2)
dt (20)
is the matrix given by
Consider also the matrix-valued process
A t
)\Sigma (M t
Then the equation (20) can be written in the form
where
U
\Gamma"R
J (2)
="
U
U
(apply (12) for the last line). We deduce that E[kV 0
is of order " \Gamma3 , and that
d
dt
We have to estimate the terms of the right-hand side.
By computing the matrix A t , we obtain that
A t
A t
with
A (11)
A (21)
A (12)
A (22)
and e
A t is uniformly bounded. As in the proof of Theorem 2.1, we see that if then the
matrix
A t is simply
A t
which satisfies
A
Thus, for
2, when ffi is close enough to 1, and when " is small enough, we have
ff
We also notice that
ffp
and that
U
)U is bounded. Thus (24) implies that for " small,
d
dt
Let us first estimate J (3)
. We deduce from the Riccati equation (11) satisfied by P t that the
process S t
defined in (21) satisfies
By computing this matrix and applying Theorem 2.1, we check that
in the spaces L p . Thus
The term \Sigma (M t
)U is easily shown to have the same order of magnitude. On the
other hand, by looking at the equation of M t and by applying It"o's formula, we can prove that
for any C 2 function ae with bounded derivatives, one has
By applying this result to the functions involved in P \Gamma1
, it appears that
J (1)
We deduce that the terms of J (3)
involving J (1)
and J (2)
are also of order " \Gamma1 . Finally, OE t
and
are respectively of order " 1=2 and " 3=2 , so the last term is of order " \Gamma1 , and we deduce that
We can also estimate J (4)
and J (5)
and check that they are of order " \Gamma3=4 . Thus (26) enables
to conclude that
in L 2 . We can take the conditional expectation with respect to Y t in this estimation, because
the conditional expectation is a contraction in L 2 ; thus E[V t
is O(1=
") in L 2 , and therefore,
we obtain from the definition (15) that
log(L t t )(r 0
Application of an integration by parts formula
The estimation of the right-hand side of (27) can be completed by means of an integration by
parts formula. It is proved in Lemma 3.4.2 of [7] that if
w; y) is a functional defined
on the probability space which is differentiable with respect to the initial condition (in the spaces
iis the differentiation with respect to the ith component of X 0
, then
0: (28)
We can write equation (17) in the form
)\Sigma (M t
dt
)d e
with (r 0
This equation can be differentiated with respect to X 0
, so we can apply
the integration by parts formula (28) to the coefficients of the matrix (r 0
its ith line. Then
(p \Gamma1@p 0
0:
By summing on i and multiplying by U , we have
log(L t
r i(r 0
U
0: (30)
The first term of (30) is exactly the term that we want to estimate in (27).
For the second term of (30), if
we have from (17) and (22) that
We proceed as in the study of (23). The stability of the matrix A t
which has been obtained in
(25) and the boundedness of U implies that (r 0
exponentially small in L 2 , so
the second term is negligible.
Let us study the third term of (30). If
then by differentiating (29) and transforming back e
w into w, we get
U
\Gamma\Sigma (M t )P \Gamma1
with
By summing on i and using
@ae
is the jth line of ae, we obtain that \Phi t
is solution of
\Gamma\Sigma (M t
where oe 0 is the Jacobian of oe. A computation shows that
The multiplication on the right by U yields a process of
is also O(" \Gamma1 ), and the term involving the second derivative of oe is O(" \Gamma1=2 ). By proceeding
again as in the study of (23), we deduce that \Phi t is of order " \Gamma1=2 .
Thus (27), (30) and the estimation of \Psi t
and \Phi t
yield
We multiply on the right by the matrix U
, the coefficients of which are of order " 3=2 for
the first column and " for the second column, and we deduce the order of "
which was
claimed in the Theorem. \Pi
4 An almost linear case
It is interesting to consider a particular case in which oe, h 0 and F 12
are constant, so that the
system (1)-(2) is 8
dx (1)
x (2)
dx (2)
dy
In particular, (H6.ffi) holds with it is possible to improve the upper bounds given
in Theorem 3.1. The result is stated in the following proposition.
Proposition 4.1 Assuming that (H1) to (H7) hold for (32), the filter M t given by equation
verifies
x (1)
in L 2 .
Proof The proof follows closely the sequence of steps adopted in Theorem 3.1. The matrices
R are now constant; the processes J (1)
, J (2)
and J (5)
are zero. The order
of S t is improved into
and
so that
is of order " \Gamma3=4 . Thus V t is O(" \Gamma1=4 ) and we obtain O(" \Gamma1=4 ) in (27).
For the end of the proof, we see that
so
is bounded. Multiplication by U yields a process of order " \Gamma1=2 , so the process \Phi t
of (31) is
bounded for " small. We can conclude that
and deduce the proposition. \Pi
--R
On some approximation techniques in non linear filtering theory
Piecewise monotone filtering with small observation noise
Approximate filter for the conditional law of a partially observed process in nonlinear filtering
Asymptotic analysis of the optimal filtering problem for one-dimensional diffusions measured in a low noise channel
Nonlinear filtering of one-dimensional diffusions in the case of a high signal-to- noise ratio
Nonlinear filtering and smoothing with high signal-to-noise ratio
Efficiency of the extended Kalman filter for non linear systems with small noise
Estimation of the quadratic variation of nearly observed semimartingales with application to filtering
Filtrage lin'eaire par morceaux avec petit bruit d'observation
Asymptotic analysis of the optimal filtering problem for two dimensional diffusions measured in a low noise channel
Nonlinear filtering and control of a switching diffusion with small observation noise
--TR | stochastic differential models;approximate filters;nonlinear filtering |
636409 | An analog characterization of the Grzegorczyk hierarchy. | We study a restricted version of Shannon's general purpose analog computer in which we only allow the machine to solve linear differential equations. We show that if this computer is allowed to sense inequalities in a differentiable way, then it can compute exactly the elementary functions, the smallest known recursive class closed under time and space complexity. Furthermore, we show that if the machine has access to a function f(x) with a suitable growth as x goes to infinity, then it can compute functions on any given level of the Grzegorczyk hierarchy. More precisely, we show that the model contains exactly the nth level of the Grzegorczyk hierarchy if it is allowed to solve n - 3 non-linear differential equations of a certain kind. Therefore, we claim that, at least in this region of the complexity hierarchy, there is a close connection between analog complexity classes, the dynamical systems that compute them, and classical sets of subrecursive functions. | INTRODUCTION
The theory of analog computation, where the internal states of a computer
are continuous rather than discrete, has enjoyed a recent resurgence
of interest. This stems partly from a wider program of exploring alternative
approaches to computation, such as quantum and DNA computation;
partly as an idealization of numerical algorithms where real numbers can
be thought of as quantities in themselves, rather than as strings of digits;
and partly from a desire to use the tools of computation theory to better
classify the variety of continuous dynamical systems we see in the world
(or at least in its classical idealization).
However, in most recent work on analog computation (e.g. [BSS89, Mee93,
Sie99, Moo98]) time is still discrete. Just as in standard computation the-
ory, the machines are updated with each tick of a clock. If we are to make
the states of a computer continuous, it makes sense to consider making its
progress in time continuous too. While a few eorts have been made in the
direction of studying computation by continuous-time dynamical systems
[Moo90, Moo96, Orp97b, Orp97a, SF98, Bou99, CMC00], no particular set
of denitions has become widely accepted, and the various models do not
seem to be equivalent to each other. Thus analog computation has not yet
experienced the unication that digital computation did through Turing's
work in 1936.
In this paper, as in [CMC00], we take as our starting point Claude Shan-
non's General Purpose Analog Computer (GPAC). This was dened as
a mathematical model of an analog device, the Dierential Analyser, the
fundamental principles of which were described by Lord Kelvin in 1876
[Kel76]. The Dierential Analyser was developed at MIT under the supervision
of Vannevar Bush and was indeed built in for the rst time in 1931
[Bow96]. Its input was the rotation of one or more drive shafts and its
output was the rotation of one or more output shafts. The main units were
interconnected gear boxes and mechanical friction wheel integrators.
Just as polynomial operations are basic to the Blum-Shub-Smale model
of analog computation [BSS89], polynomial dierential equations are basic
to the GPAC. Shannon [Sha41] showed that the GPAC generates exactly
the dierentially algebraic functions, which are unique solutions of polynomial
dierential equations. This set of functions includes simple functions
like e x and sin x as well as sums, products, and compositions of these, and
solutions to dierential equations formed from them such as f
Pour-El [Pou74], and later Lipshitz and Rubel [LR87], extended Shannon's
work and made it rigorous.
The GPAC also corresponds to the lowest level | we denote here by
G | in a theory of recursive functions on the reals proposed by Moore
[Moo96]. There, in addition to composition and integration, a zero-nding
operator analogous to the minimization operator of classical recursion
theory is included. In the presence of a liberal semantics that denes
even when f is undened on x, this permits contraction
of innite computations into nite intervals, and renders the arithmetical
and analytical hierarchies computable through a series of limit processes
similar to those used by Bournez [Bou99]. However, such an operator is
clearly unphysical, except when the function in question is smooth enough
for zeroes to be found in some reasonable way.
In [CMC00] a new extension of G was proposed. The operators of the
GPAC were kept the same | integration and composition | but piecewise-
analytic basic functions were added, namely k
the Heaviside step function,
Adding one of these functions, k for some xed k, as an 'oracle' can be
thought of as allowing an analog computer to measure inequalities in a
1)-times dierentiable way. These functions are also unique solutions of
dierential equations such as xy two boundary conditions
rather than just an initial condition, which is a slightly weaker denition
of uniqueness than that used by Pour-El to dene GPAC-computability.
By adding these to the set of basic functions, for each k we get a class we
denote by G
A basic concern of computation theory is whether a given class of functions
is closed under various operations. One such operation is iteration,
where from a function f(x) we dene a function F (x;
applied t times to x, for t 2 N. The main result of [CMC00] is that G
is closed under iteration for any k 0, while G is not. (Here we adopt the
convention that a function where one or more inputs are integers is in a
given analog class if some extension of it to the reals is.) It then follows
that G includes all primitive recursive functions, and has other closure
properties as well.
To rene these results, in this paper we consider a restricted version of
Shannon's GPAC. In particular, we restrict integration to linear integra-
tion, i.e. solving linear dierential equations. We dene then a class of
computable functions L whose operators are composition and linear integration
and then add, as before, a basic function k for some xed k > 2.
The model we obtain, L+ k , is weaker than G . One of the main results
of this paper is that, for any xed k > 0, L+ k contains precisely the
elementary functions, a subclass of the primitive recursive functions introduced
by Kalmar [Kal43] which is closed under the operations of forming
bounded sums and products. Inversely, using Grzegorczyk and Lacombe's
denition of computable continuous real function [Grz55, Lac55], we show
that all functions in L+ k are elementarily computable for any xed k > 2,
and that if a function f 2 L+ k is an extension to the reals of some function
4~
f on the integers, then ~
f is elementary as well. 1 Thus we seem to have
found a natural analog description of the elementary functions, the smallest
known recursive class closed under time and space complexity [Odi00].
To generalize this further, we recall that Grzegorczyk [Grz53] proposed
a hierarchy of computable functions that straties the set of primitive recursive
functions. The elementary functions are simply the third level of
this hierarchy. We show that if we allow L to solve n 3 non-linear
dierential equations of a certain kind, then all functions in the nth level
of the Grzegorczyk hierarchy have extensions to the reals in the resulting
analog class. A converse result also holds.
Therefore, we claim that there is a surprising and elegant connection
between classes of analog computers on the one hand, and subclasses of
the recursive functions on the other. This suggests, at least in this region
of the complexity hierarchy, that analog and digital computation may not
be so far apart.
The paper is organized as follows. In Section 2 we review classical recursion
theory, the elementary functions, and the Grzegorczyk hierarchy.
In Section 3 we recall some basic facts about linear dierential equations.
Then, in Section 4 we dene a general model of computation in continuous
time that can access a set of 'oracles' or basic functions, compose these,
and solve linear dierential equations. We call this class L or more
generally for a set of oracles '.
We then prove bounds on the growth of functions denable in L
The existence of those bounds allows us to prove the main lemma of the
paper, which shows that L+ k is closed under forming bounded sums and
bounded products. With this, we are able to prove that L contains
extensions to the reals of all elementary functions. Inversely, we show that
all functions in are elementarily computable. This shows that the
correspondence between L+ k and the elementary functions is quite robust.
Then, in Section 5 we consider the higher levels in the Grzegorczyk hierarchy
by dening a hierarchy of analog classes Gn Each one of these
classes is dened similarly to L that our model is now allowed
to solve up to n 3 non-linear dierential equations, of a certain kind,
which produces iterations of functions previously dened. We then show
that this hierarchy coincides, level by level, with the Grzegorczyk hierarchy
in the same sense that L coincides with the elementary functions.
Finally, we end with some remarks and open questions.
1 The approach we follow [Ko91, Pou74, Wei00] to describe the complexity of real
functions is eective, in the sense that it extends standard complexity theory and relies
on the Turing machine as the model of computation to dene computability and
complexity. A distinct approach is to consider reals as basic entities as in [BSS89].
2. SUBRECURSIVE CLASSES OVER N AND THE
In classical recursive function theory, where the inputs and output of
functions are natural numbers N, computational classes are often dened
as the smallest set containing a set of initial functions and closed under certain
operations, which take one or more functions in the class and create
new ones. Thus the set consists of all those functions that can be generated
from the initial ones by applying these operations a nite number of
times. Exemples of common initial functions are zero, successor, projections
x y and 0 if x < y. Typical operations include (where ~x represents a
vector of variables, which may be absent):
1. Composition: Given an n-ary function f and a function g with n
2. Primitive recursion: Given f and g of the appropriate arity, dene h
such that h(~x;
3. Bounded sum: Given f(~x; y), dene h(~x;
z<y f(~x; z).
4. Bounded product: Given f(~x; y), dene h(~x;
z<y f(~x; z).
By starting with various sets of basic functions and demanding closure
under various properties, we can dene various natural classes. In partic-
ular, we will consider:
Primitive recursive functions are those that can be generated from zero,
successor, and projections using composition and primitive recursion.
Elementary functions are those that can be generated from zero, suc-
cessor, projections, addition, and cut-o subtraction using composition and
the operation of forming bounded sums and bounded products.
In classical recursive function theory more general objects called
which take an innite sequence and a nite
number of integers as input, can be dened. To dene a functional we
add a new operation A(; which accesses the x-th element of a
given innite sequence . When A is not used in the recursive denition
of , degenerates into a function. In particular we say that a functional
is elementary if it can be generated from the basic functionals
y, F (; x;
using composition, bounded sums and bounded prod-
ucts. We will need this notion of elementary functional in Proposition 4.4.
The class of elementary functions, which we will call E , was introduced by
Kalmar [Kal43]. As examples, note that multiplication and exponentiation
over N are both in E , since they can be written as a bounded sum and a
bounded product respectively:
z<y x and x
z<y x. Since E
is closed under composition, for each m the m-times iterated exponential
In fact, no
elementary function can grow faster than 2 [m] for some xed m, and many
of our results will depend on the following bound on their growth [Cut80]:
Proposition 2.1. If f 2 E, there is a number m such that, for all ~x,
The elementary functions are exactly the functions computable in elementary
time [Cut80], i.e., the class of functions computable by a Turing
machine in a number of steps bounded by some elementary function.
The class E is therefore very large, and many would argue that it contains
all practically computable functions. It includes, for instance, the
connectives of propositional calculus, functions for coding and decoding sequences
of natural numbers such as the prime numbers and factorizations,
and most of the useful number-theoretic and metamathematical functions
[Cut80, Ros84]. However, Proposition 2.1 shows that it does not contain
the iterated exponential 2 [m] (x) where the number of iterations m is a
variable, since any function in E has an upper bound where m is xed.
The iterated exponential is, however, primitive recursive. As a matter
of fact, it belongs to one of the lowest levels of the Grzegorczyk hierarchy
[Grz53, Ros84], which measures the structural complexity of the class of
primitive recursive functions, and we review below.
Let's rst recall what a LOOP-program is. It is a program which can
be written in a programming language with assignments, conditional and
FOR statements, but with no WHILE or GO-TO statements. Notice that
a LOOP-program always halts. The primitive recursive functions are precisely
the functions that are computed by LOOP-programs. We can stratify
them considering the subclasses of functions computable by LOOP-
programs with up to n nested FOR statements. For this gives the
elementary functions [HW99], and for n 2 this gives precisely E n+1 , the
1-th level of the Grzegorczyk hierarchy.
Originally [Grz53], the Grzegorczyk hierarchy was dened recursively.
The elementary functions are the third level of the Grzegorczyk hierarchy,
. For the following levels we consider a family of functions
N. These are, essentially, repeated iterations of the successor
function, and each one grows qualitatively faster than the previous one.
composing it yields functions as large as
2 [m] for any xed m. Iterating
In general, En+1
can be dened with
En 1 and closure under bounded sums and products (cf. [Odi00]):
(The Grzegorczyk hierarchy) For n 3, E n is the smallest class containing
zero, successor, the projections, cut-o subtraction, and En 1 , which
is closed under composition, bounded sum, and bounded product.
The union of all the levels of the Grzegorczyk hierarchy is the class PR
of primitive recursive functions, i.e.,
We can also generalize Proposition 2.1 and put a bound on the growth
of functions anywhere in the Grzegorczyk hierarchy:
Proposition 2.2. If n 2 and f 2 E n then there is an integer m such
that f(~x) E [m]
3. LINEAR DIFFERENTIAL EQUATIONS
An ordinary linear differential equation is a differential equation of the form
x⃗'(t) = A(t) x⃗(t) + b⃗(t),
where A(t) is an n × n matrix whose entries are functions of t and b⃗(t) is a vector of functions of t. If b⃗ = 0 we say that the system is homogeneous. We can reduce a non-homogeneous system to a homogeneous one by introducing an auxiliary variable x_{n+1} which satisfies x'_{n+1} = 0 and x_{n+1}(0) = 1, that is, x_{n+1} ≡ 1. The new matrix will just be
( A(t)  b⃗(t) )
( 0      0   )   (1)
This matrix is not invertible, which makes (1) harder to solve. However, since we don't need to solve the system explicitly, we prefer to consider the homogeneous equation
x⃗'(t) = A(t) x⃗(t)   (2)
as the general case in the remainder of the paper.
This leads to the original definition of the Grzegorczyk hierarchy (cf. [Clo99]).
The fundamental existence theorem for differential equations guarantees the existence and uniqueness of a solution in a certain neighborhood of an initial condition for the system x⃗' = f(t, x⃗) when f is Lipschitz. For linear differential equations, we can strengthen this to global existence whenever A(t) is continuous, and establish a bound on x⃗ that depends on ||A(t)||.³
Proposition 3.1 ([Arn96]). If A(t) is defined and continuous on an interval I = [a, b], where a ≤ 0 ≤ b, then the solution of Equation 2 with initial condition x⃗(0) = x⃗_0 is defined and unique on I. Furthermore, if ||A(t)|| is non-decreasing this solution satisfies
||x⃗(t)|| ≤ ||x⃗_0|| e^{||A(t)|| |t|}.
Therefore, if A(t) is continuous and non-decreasing on R then solutions of linear differential equations are defined on arbitrarily large domains and have an exponential bound on their growth that depends only on ||A(t)||.
Proposition 3.1 holds both for the max norm, ||x⃗|| = max_i |x_i|, and for the Euclidean norm [Har82]. If we use the max norm, which satisfies ||A(t) x⃗|| ≤ ||A(t)|| ||x⃗||, each component of the solution obeys the bound above when the conditions of Proposition 3.1 are fulfilled.
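The bound of Proposition 3.1 is easy to check numerically. The numpy sketch below (an illustration of ours, not code from the paper; the particular A(t) is our own choice) integrates a homogeneous linear system with non-decreasing ||A(t)|| in the max norm and compares the solution with the exponential bound.

```python
import numpy as np

def max_norm_matrix(A):
    # operator norm induced by the max norm: largest absolute row sum
    return np.abs(A).sum(axis=1).max()

def A(t):
    # an arbitrary example whose entries (and hence ||A(t)||) are non-decreasing for t >= 0
    return np.array([[0.1 * t, 1.0], [0.0, 0.2 * t]])

x = np.array([1.0, -0.5])
x0_norm = np.abs(x).max()
t, dt = 0.0, 1e-4
while t < 2.0:                      # crude forward Euler integration
    x = x + dt * A(t) @ x
    t += dt
bound = x0_norm * np.exp(max_norm_matrix(A(t)) * t)
print(np.abs(x).max(), "<=", bound)
```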
4. THE ANALOG CLASS L + θ_k AND THE ELEMENTARY FUNCTIONS
In [Moo96, CMC00] a definition of Shannon's General Purpose Analog Computer (GPAC) in the framework of the theory of recursive functions on the reals is given. We denote the corresponding set of functions by G. It is the set of functions that can be inductively defined from the constants 0, 1, and −1, projections, and the operations of composition and integration. Integration generates new functions by the following rule: if f and g have appropriate arities and belong to G then the function h defined by the initial condition h(x⃗, 0) = f(x⃗) and the differential equation ∂_y h(x⃗, y) = g(x⃗, y, h(x⃗, y)) also belongs to G, over the largest interval containing 0 on which the solution is finite and unique. Thus G has the power to solve arbitrary initial value problems with unique solutions, constructed recursively from functions already generated.
³ By || · || we denote both the norm of a vector and the norm of a matrix, with ||A|| = sup{||A x⃗|| : ||x⃗|| ≤ 1}. By A being continuous, respectively increasing, we mean that all entries of A are continuous, respectively increasing, functions.
We define here a proper subclass of G, which we call L, by restricting the integration operator to solving time-varying linear differential equations. To make the definition more general, we add a set of 'oracles' or additional basic functions. Let φ be a set of continuous functions defined on R^k for some k. Then L + φ is the class of functions of several real variables defined recursively as follows:
Definition 4.1. A function h belongs to L + φ if each of its components can be inductively defined from the constants 0, 1, −1, and π, the projections U_i(x⃗) = x_i, functions in φ, and the following operators:
1. Composition: if a p-ary function f and a function g with p components belong to L + φ, then h defined as h(x⃗) = f(g(x⃗)) belongs to L + φ.
2. Linear integration: if f and g are in L + φ then the function h satisfying the initial condition h(x⃗, 0) = f(x⃗) and solving the differential equation
∂_y h(x⃗, y) = g(x⃗, y) h(x⃗, y)
belongs to L + φ. If h is vector-valued with n components, then f has the same dimension and g(x⃗, y) is an n × n matrix whose components belong to L + φ. As shorthand, we will write h = f + ∫ g h dy.
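As an entirely illustrative reading of the linear integration operator (a sketch of ours, not part of the paper), the snippet below solves the vector-valued case h(0) = f, ∂_y h = g(y) h numerically; taking f = (0, 1) and the constant matrix g = [[0, 1], [−1, 0]] produces (sin y, cos y), two of the functions used later in Lemma 4.6.

```python
import numpy as np

def linear_integration(f_vec, g_matrix, y, steps=200000):
    """Forward-Euler approximation of h(0) = f_vec, d/dy h(y) = g_matrix(y) @ h(y)."""
    h = np.array(f_vec, dtype=float)
    dy = y / steps
    for i in range(steps):
        h = h + dy * g_matrix(i * dy) @ h
    return h

rotation = lambda y: np.array([[0.0, 1.0], [-1.0, 0.0]])
print(linear_integration([0.0, 1.0], rotation, 1.0))   # approximately (sin 1, cos 1)
print(np.sin(1.0), np.cos(1.0))
```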
Several notes on this definition are in order. First, note that linear integration can only solve differential equations ∂_y h = g h where the right-hand side is linear in h, rather than the arbitrary dependence ∂_y h = g(x⃗, y, h) of which the GPAC is capable. Secondly, using the same trick as in Section 3 we can expand our set of variables, and so solve non-homogeneous linear differential equations of the form
∂_y h(x⃗, y) = g₁(x⃗, y) + g₂(x⃗, y) h(x⃗, y).
Finally, the reader will note that we are including π as a fundamental constant. The reason for this will become clear in Lemma 4.7. Unfortunately, even though it is easy to show that π belongs to G, we have not found a way to derive π using this restricted class of differential equations. Perhaps the reader can do this, or find a proof that we cannot.
We will use the fact that, unlike solving more general differential equations, linear integration can only produce total functions:
Proposition 4.2. All functions in L + φ are continuous and are defined everywhere.
Let us look at a few examples. Addition, as a function of two variables, is in L since x + y = x + ∫_0^y 1 dz. Similarly, multiplication can be defined as x · y = 0 + ∫_0^y x dz, and exp is in L since it can be defined as exp(y) = 1 + ∫_0^y exp(z) dz. Then, by using either composition or linear integration, exp^[2](x) = e^{e^x} is in L as well. (Note that we are now using e rather than 2 as our base for exponentiation.) Thus the iterated exponential exp^[m] is in L for any fixed m. However, the function exp^[n](x), where the number of iterations n is a variable, is neither in L nor in G. We prove this in [CMC00], and use it to show that Shannon's GPAC is not closed under iteration. However, if G is extended with a function θ_k, for k ≥ 0, then the resulting class G + θ_k is closed under iteration, where θ_k is defined as follows:
θ_k(x) = x^k for x > 0 and θ_k(x) = 0 for x ≤ 0
(θ_0 is the Heaviside step function). Each θ_k(x) can be interpreted as a function which checks inequalities such as x > 0 in a (k − 1)-times differentiable way (for k > 1). It was also shown in [CMC00] that allowing those functions is equivalent to relaxing slightly the definition of the GPAC by solving differential equations with two boundary values instead of just an initial condition.
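The next sketch (ours; it assumes the form θ_k(x) = x^k for x > 0 and 0 otherwise, as stated above) checks numerically that θ_3 has a bounded second difference quotient at 0 that tends to 0, consistent with reading θ_k as a (k − 1)-times differentiable inequality test.

```python
def theta(k, x):
    """Assumed form: theta_k(x) = x**k for x > 0 and 0 otherwise (theta_0 = Heaviside step)."""
    return float(x) ** k if x > 0 else 0.0

# The second difference quotient of theta_3 at 0 equals h and tends to 0 as h -> 0,
# consistent with theta_3 being twice continuously differentiable there.
for h in (1e-1, 1e-2, 1e-3):
    second_diff = (theta(3, h) - 2 * theta(3, 0.0) + theta(3, -h)) / h**2
    print(h, second_diff)
```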
In this section we consider L + θ_k and prove that for any fixed k > 2 this class is an analog characterization of the elementary functions. We will start by noting that all functions in L + θ_k have growth bounded by a finitely iterated exponential, exp^[m] for some m. This is analogous to the bound on elementary functions in Proposition 2.1, and can easily be proved by structural induction, using the bound in Proposition 3.1.
Proposition 4.3. Let h be a function in L + θ_k of arity m. Then there is a constant d and constants A, B, C, D such that, for all x⃗,
||h(x⃗)|| ≤ A exp^[d](B ||x⃗|| + C) + D,
where ||x⃗|| = max_j |x_j|. The least d for which there are such A, B, C, D will be called the degree of h, or deg h.
Propositions 2.1 and 4.3 establish the same kind of bounds for E and L + θ_k. But the relation between those two classes can be shown to be much tighter: namely, all functions in L + θ_k can be approximated by elementary functions, and all extensions to the reals of elementary functions are contained in L + θ_k. Since E is defined over the natural numbers and L + θ_k is defined over the reals, we first need to set some conventions.
(Convention 1) We say that a function over the reals is elementary if it fulfills Grzegorczyk and Lacombe's definition of computable continuous real function [Grz55, Grz57, Lac55, PR89] and if the corresponding functional is elementary. First, we write ψ ⇝ a if the integer sequence ψ : N → N satisfies |ψ(n)/(n + 1) − a| < 1/(n + 1) for all n ∈ N. Note that, by dividing by n + 1, this definition allows sequences of integers to converge to real numbers. To define computability of real functions that range over R with elementary functions and functionals defined on N, we use a simple bijective encoding between N and Z, and we say a real number a ∈ R is elementarily computable if there is an elementary function ψ : N → N such that ψ ⇝ a. Finally, a continuous real function f is elementarily computable if there is an elementary functional Φ which, for all a ∈ R and for all sequences ψ : N → N such that ψ ⇝ a, satisfies Φ(ψ; ·) ⇝ f(a), where Φ(ψ; ·) denotes the sequence {Φ(ψ; n) : n ∈ N}. The definition for vector-valued functions and functions of several variables is similar.
The definition above can be extended in a straightforward manner to E^n-computability. This was already described in [Zho97], where f is said to be E^n-computable if: (1) f maps every E^n-computable sequence of reals into an E^n-computable sequence of reals; and (2) there is a function d in E^n such that |x − y| < 1/d(k) implies |f(x) − f(y)| < 1/(k + 1) for all x, y in any bounded domain of f. We notice that time and space complexity for E^n functions, with n ≥ 3, can alternatively be defined using a function oracle Turing machine as the model of computation [Ko91, Wei00]. For example, a function f is elementary if f(x) can be computed with precision 1/n in a number of steps elementary in |x| and n.
(Convention 2) Conversely, we say that L + θ_k contains a function f over N if L + θ_k contains some extension of f to the reals, and similarly for functions of several variables. These two conventions allow us to compare analog and digital complexity classes.
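To fix ideas, here is a small illustration of ours (using the sequence representation described in Convention 1 above, which is our reconstruction of it): an integer sequence ψ(n) = round(a(n + 1)) converges to a in the stated sense, and a naive functional built on top of a rational approximation of exp yields a sequence converging to exp(a). The functional below is only a demonstration of the convention, not a formally elementary functional.

```python
from math import exp

def psi(a, n):
    """Integer sequence with |psi(n)/(n+1) - a| <= 1/(2(n+1)): psi(n) = round(a*(n+1))."""
    return round(a * (n + 1))

def Phi_exp(psi_fn, n):
    """Read an approximation of a at higher precision, return an integer approximating exp(a)*(n+1)."""
    m = 10 * (n + 1)
    a_approx = psi_fn(m) / (m + 1)
    return round(exp(a_approx) * (n + 1))

a = 1.2345
for n in (10, 100, 1000):
    print(Phi_exp(lambda m: psi(a, m), n) / (n + 1), exp(a))
```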
Proposition 4.4. If f belongs to L + θ_k, with k > 2, then f is elementarily computable.
Proof. Once again, the proof will be done by structural induction. To keep the notation simpler, we won't include the encoding function in the proof. It is clear that the constants 0, 1, −1 and π are elementarily computable (e.c. for short). U_i(x⃗) is simply x_i and is obviously e.c., and θ_k(x) is e.c. since polynomials are in E and the parity of x is computable in E too.
For simplicity, we prove that the composition of e.c. functions is also e.c. just for real functions of one variable. The proof is similar for the general case. If g is e.c. then there is a functional Φ_g such that Φ_g(ψ; ·) ⇝ g(a) for any sequence ψ ⇝ a, and if f is e.c. then there is a functional Φ_f such that Φ_f(φ; ·) ⇝ f(b) for any sequence φ ⇝ b. Setting Φ(ψ; ·) = Φ_f(Φ_g(ψ; ·); ·) gives Φ(ψ; ·) ⇝ f(g(a)). Since the composition of two elementary functionals is elementary, we are done.
Finally, we have to show that if f and g are e.c. then so is the function h such that h(x⃗, 0) = f(x⃗) and ∂_y h(x⃗, y) = g(x⃗, y) h(x⃗, y). This means that we have to show that there is an elementary functional Φ such that Φ(ψ; ·) ⇝ h(x⃗, y) for all sequences ψ ⇝ y.
We will do this using standard numerical techniques, namely Euler's method. Let us suppose that h ∈ L + θ_k is twice continuously differentiable, which is guaranteed if k > 2.⁴ Fixing x⃗ and expanding h around the discretization point t_i we obtain
h(x⃗, t_{i+1}) = h(x⃗, t_i) + Δt_i ∂_t h(x⃗, t_i) + (Δt_i²/2) ∂_t² h(x⃗, ξ)
for some ξ with t_i < ξ < t_{i+1}. Since g is e.c. there is an elementary functional Φ_g such that Φ_g(ψ; ·) ⇝ g(x⃗, t). To obtain an estimate g̃_i of the value of g on (x⃗, t_i), we set g̃_i = Φ_g(ψ; n)/(n + 1). The accuracy of this estimate depends on n since |g̃_i − g(x⃗, t_i)| < 1/(n + 1). We will set below a lower bound for n.
The discrete approximation of h is then simply done by Euler's method,
(5)   h̃_{i+1} = h̃_i + Δt_i g̃_i h̃_i,
where the t_i's are the discretization steps, and we will show that we can make the discretization error sufficiently small with an elementary number of discrete steps.⁵ We define a function h̃ by (5), where the step sizes Δt_i of the discretization are to be fixed by the number of steps of the numerical approximation. We now define an elementary functional Φ. For each fixed x⃗ and any sequence ψ ⇝ y, Φ(ψ; l) is defined as the integer closest to (l + 1) h̃_N, where N is a suitably increasing elementary function of l and where h̃_N is obtained using a discretization step of size approximately y/N (computed from ψ) in (5). Note that then Φ(ψ; l) is always an integer, as required by the definition, and |Φ(ψ; l)/(l + 1) − h̃_N| ≤ 1/(2(l + 1)).
⁴ We don't study the particular cases k ≤ 2 since we are mainly interested in the properties of L + θ_k for large k, i.e., when L + θ_k only contains "smooth" functions.
⁵ When fixed point numerical calculations are used, there is also a round-off error. However, in the worst case, the accumulated round-off error is of the same order as the discretization error (cf. [Har82, 3.4.4] and [VSD86]). Therefore, we will only study the discretization error of the numerical approximation.
We will now show how to choose n, m and N as functions of l such that the total error is at most 1/(2(l + 1)). To prove that this can be done in elementary time, we first need to set a bound on ||∂_t² h||. Since h″ can be written as the sum and product of functions with bounds of the form A exp^[d](B ||(x⃗, t)||), from Proposition 4.3, ||h″(x⃗, t)|| is bounded by A exp^[d](B ||(x⃗, t)||) for some d, A and B. We will call this bound β, and we will denote by β̄ a number larger than β(x⃗, y), for instance β(x⃗, |ψ(0)| + 1), for any l and any ψ ⇝ y.
The discretization error is ε_N = h̃_N − h(x⃗, y) and satisfies a bound, elementary in n, m and N, involving β̄. Furthermore, because f is e.c., the initial value h̃_0 can be chosen with |h̃_0 − f(x⃗)| < 1/(m + 1), where Φ_f is the elementary functional that computes f. A little tedious algebra shows then that the total error decreases elementarily fast as n, m and N grow.
Therefore, given β̄, which is elementary, it suffices to take n, m and N to be suitable elementary functions of l to guarantee that |h̃_N − h(x⃗, y)| ≤ 1/(2(l + 1)) for all l. Note that, since n, m and N are elementary functions of l, Φ(ψ; l) can be computed in elementary time. Therefore, by the triangle inequality, |Φ(ψ; l)/(l + 1) − h(x⃗, y)| ≤ 1/(l + 1). Finally, we just have to show how to define Φ from another elementary functional Ψ such that Ψ(ψ; l) is the integer closest to (l + 1) h̃_N when h̃_N is computed from the rational data supplied by Φ_f and Φ_g; Ψ is elementary since it is a composition of Φ_f, Φ_g and elementary functions. Since, for all l and ψ ⇝ y, |h(x⃗, y) − h̃_N| is small, a little more algebra allows us to show that Φ(ψ; ·) ⇝ h(x⃗, y). This concludes the proof.
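The behaviour exploited in the proof can be mimicked numerically. The sketch below (ours) runs Euler's method for ∂_y h = g(y) h with a doubling number of steps and shows the discretization error shrinking roughly in proportion to the step size, which is what makes an elementary number of steps sufficient.

```python
import math

def euler(g, h0, y, steps):
    """Euler's method for d/dy h = g(y) * h with h(0) = h0."""
    h, dy = h0, y / steps
    for i in range(steps):
        h += dy * g(i * dy) * h
    return h

# d/dy h = 2y * h, h(0) = 1  =>  h(y) = exp(y**2)
g = lambda y: 2.0 * y
exact = math.exp(1.0)
for steps in (100, 200, 400, 800):
    print(steps, abs(euler(g, 1.0, 1.0, steps) - exact))
```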
As a corollary, any function in L + θ_k that sends integers to integers is elementary on the integers:
Corollary 4.5. If a function f ∈ L + θ_k is an extension of a function f̃ : N^m → N, then f̃ is elementary.
Proof. Proposition 4.4 shows us how to successively approximate f(x) to within an error ε in an amount of time elementary in 1/ε and x. If f(x) is an integer, we just have to approximate it to error less than 1/2 to know its value exactly.
Next we will prove the converse of this, i.e. that L + θ_k contains all elementary functions, or rather, extensions of them to the reals. We will first prove two lemmas.
Lemma 4.6. For any fixed k > 0, L + θ_k contains sin, cos, exp, the constant q for any rational q, and extensions to the reals of successor, addition, and cut-off subtraction.
Proof. We showed above that L + θ_k includes addition, and the successor function is just addition by 1. For subtraction, we have x − y = x + ∫_0^y (−1) dz. We can obtain any integer by repeatedly adding 1 or −1. For rational constants, by repeatedly integrating 1 we can obtain the function z^k/k! and thus the constant 1/k!. We can multiply this by an integer to obtain any rational q.
For cut-off subtraction x ∸ y, we first define a function s(z) such that s(z) = 0 for z ≤ 0 and s(z) = 1 for z ≥ 1. This can be done in L + θ_k by setting
s(z) = (1/c_k) ∫_0^z θ_k(t) θ_k(1 − t) dt,
where c_k = ∫_0^1 z^k (1 − z)^k dz is a rational constant depending on k. Then (x − y) s(x − y) is an extension to the reals of cut-off subtraction.
Finally, sin and cos are defined by linear integration from the initial conditions sin 0 = 0, cos 0 = 1 and the system ∂_z sin = cos, ∂_z cos = −sin, and exp is in L + θ_k as proved above.
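The cut-off subtraction construction is easy to test numerically; the sketch below (ours, and it assumes the normalization c_k = ∫_0^1 t^k(1 − t)^k dt and the combination (x − y) s(x − y) described in the reconstruction above) approximates s by a Riemann sum and checks the behaviour on integer arguments.

```python
def theta(k, x):
    return x ** k if x > 0 else 0.0

def s(z, k=2, steps=2000):
    """s(z) = (1/c_k) * integral_0^z theta_k(t)*theta_k(1-t) dt: 0 for z <= 0, 1 for z >= 1."""
    c_k = sum((i / steps) ** k * (1 - i / steps) ** k for i in range(steps)) / steps
    zz = min(max(z, 0.0), 1.0)   # the integrand vanishes outside [0, 1]
    integral = sum(theta(k, i * zz / steps) * theta(k, 1 - i * zz / steps)
                   for i in range(steps)) * zz / steps
    return integral / c_k

def cutoff_sub(x, y, k=2):
    # agrees with max(x - y, 0) whenever x - y is an integer
    return (x - y) * s(x - y, k)

print(cutoff_sub(7, 3), cutoff_sub(3, 7))   # approximately 4.0 and exactly 0.0
```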
We now show that L + θ_k has the same closure properties as E, namely the ability to form bounded sums and products.
Lemma 4.7. Let f be a function on N and let g be the function on N defined from f by bounded sum or bounded product. If f has an extension to the reals in L + θ_k, then g does also.
Proof. For simplicity, we give the proof for functions of one variable. We will abuse notation by identifying f and g with their extensions to the reals.
We first define a step function F which matches f on the integers, and whose values are constant on each interval [j, j + 1/2], j ∈ N. F can be defined as F(t) = f(s(t)), where s is a function with s(t) = j for t ∈ [j, j + 1/2], given by s(0) = 0 and s′(t) = (1/c_k) θ_k(−sin 2πt), where c_k = ∫_0^{1/2} sin^k(2πt) dt is a constant depending only on k. Since c_k is rational for k even and a rational multiple of 1/π for k odd, s is definable in L + θ_k. (Now our reasons for including π in the definition of L + θ_k become clear.)
The bounded sum of f is easily defined in L + θ_k by linear integration. Simply write
Σ_{j<x} f(j) = (1/c_k) ∫_0^x F(t) θ_k(sin 2πt) dt.
Defining the bounded product of f in L + θ_k is more difficult. Let us first set some notation. Let f_j denote f(j) for j ∈ N, which is also equal to F(t) for t ∈ [j, j + 1/2], and let g_n = Π_{j<n} f_j denote the bounded product we wish to define. The idea of the proof is to approximate the iteration g_{j+1} = g_j f_j using synchronized clock functions as in [Bra95, Moo96, CMC00]. However, since the model we propose here only allows linear integration, the simulated functions cannot coincide exactly with the bounded product. Nevertheless, we can define a sufficiently close approximation because f and g have bounded growth by Proposition 4.3. Then, since f and g have integer values, the accumulated error resulting from this approximation can be removed with a suitable continuous step function r which returns the integer closest to t as long as the error is 1/4 or less.
Now define a two-component function y⃗(n, t) with y₁(n, 0) = y₂(n, 0) = 1 and
(7)   ∂_t y₁ = γ (y₂ F(t) − y₁) θ_k(sin 2πt),   ∂_t y₂ = γ (y₁ − y₂) θ_k(−sin 2πt),
where γ = γ(n) is an increasing function of n. Then we claim that g_n = r(y₁(n, n)). We will see that if γ grows quickly enough, then by choosing it appropriately we can make the approximation error |y₁(n, n) − g_n| as small as we like, and then remove the error by applying r.
Again, the idea is that on alternate intervals we hold either y₁ or y₂ constant and update the other one. For integer j, it is easy to see that on [j, j + 1/2] the term θ_k(−sin 2πt) holds y₂ constant, and y₁ moves toward y₂ f(j). Quantitatively, solving (7) for t ∈ [j, j + 1/2], and then for t ∈ [j + 1/2, j + 1], where y₁ is held constant and y₂ moves toward y₁, gives us the recursion
(8)   y₁(j + 1) = y₂(j) f_j + (y₁(j) − y₂(j) f_j) e^{−γ c_k},   y₂(j + 1) = y₁(j + 1) + (y₂(j) − y₁(j + 1)) e^{−γ c_k}.
Note that if e^{−γ c_k} is sufficiently small then the recursion closely tracks g_{j+1} = g_j f_j.
Now let m be such that the f_j and g_j, for j ≤ n, are bounded from above by A exp^[m](Bn) as in Proposition 4.3. Below we show that we can choose γ as an iterated exponential of n, adjusting m if necessary, and that the resulting error bound on |y₁(n, n) − g_n| holds for all m, n ≥ 1. Since this bound is less than 1/4, we can round the value of y₁ to the nearest integer using r, as claimed.
To conclude the proof, we show that |y₁(n, n) − g_n| < 1/4. Without loss of generality, we will set the constants in the bound on f and g to A = 3 and B = 1, since A exp^[m](B·) can always be bounded by 3 exp^[m′](·) with m′ large enough. Also, to simplify the notation, we will denote y₁(n, j) and y₂(n, j) by y₁(j) and y₂(j), respectively. We will prove that |y₁(n) − g_n| < 1/4 for all m, n ≥ 1, proceeding by induction on j for j = 1, ..., n.
Recall that y₁(0) = y₂(0) = 1. Equation 8 shows that |y₁(1) − f₀| ≤ e^{−γ c_k} |1 − f₀|, together with a similar estimate for |y₂(1) − f₀|. We will now show that if the errors |y₁(j) − g_j| and |y₂(j) − g_j| satisfy suitable bounds, (9) and (10), then the corresponding bounds (11) and (12) hold at step j + 1, for all j ≤ n − 1.
First, note that from (9), (10) and the triangle inequality we obtain a bound (13) on |y₁(j) − y₂(j)|. To prove (11) from (9) and (10) we use the recursion of Equation 8 and the bounds on f_j and g_j. From (8) we obtain an expression (14) for y₁(j + 1) − g_{j+1}. From (9) we can write y₁(j) as g_j plus an error term, and from (13) the same for y₂(j). Then (14) can be rewritten accordingly and bounded term by term. Since the error term is small and γ is large, the first term dominates the others and, consequently, |y₁(j + 1) − g_{j+1}| satisfies (11), as claimed. The proof that (9) and (10) imply (12) is similar.
Finally, we briefly show that the resulting error bound, which is always positive, is smaller than 1/4: setting m = n = 1 we obtain the value 32 · 9 · e⁴ / e^{3e²}, which is far smaller than 1/4, and it is easy to verify that the bound decreases when m and n increase.
We illustrate this construction in Figure 1. We approximate the bounded product of the identity function, i.e. the factorial, g_n = Π_{0<j<n} j = (n − 1)!. We numerically integrated Equation (7) using a standard package (Mathematica).
FIG. 1. A numerical integration of Equation (7), where f is an L + θ_k extension of the identity and k = 2. We obtain an approximation of an extension to the reals of the factorial function. In this example, where we chose a small gain γ < 4, the approximation is just sufficient to remove the error with r and obtain exactly 4! = r(y₁(5)).
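The same experiment can be reproduced with a few lines of Python. The sketch below (ours) uses the clock equations in the form reconstructed as (7) above, which is our reading of the construction; the particular gain γ = 80 and the step function F(t) = max(⌊t⌋, 1) are our own choices, made so that y₁(5) rounds exactly to 4!.

```python
import math

def theta(k, x):
    return x ** k if x > 0 else 0.0

def F(t):                      # step function matching f(j) = max(j, 1) on [j, j + 1/2]
    return max(int(math.floor(t)), 1)

def clock_product(n, gamma=80.0, k=2, dt=1e-4):
    y1, y2, t = 1.0, 1.0, 0.0
    while t < n:
        dy1 = gamma * (y2 * F(t) - y1) * theta(k, math.sin(2 * math.pi * t))
        dy2 = gamma * (y1 - y2) * theta(k, -math.sin(2 * math.pi * t))
        y1, y2, t = y1 + dt * dy1, y2 + dt * dy2, t + dt
    return round(y1)           # the role of the rounding function r

print(clock_product(5))        # 24 = 4!
```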
An interesting question is whether L + θ_k is closed under bounded product for functions with real, rather than integer, values. Our conjecture is that it is not, but we have no proof of this.
From the previous two lemmas it follows that:
Proposition 4.8. If f is an elementary function, then L + θ_k contains an extension of f to the reals.
Taken together, Propositions 4.4 and 4.8 show that the analog class L + θ_k corresponds to the elementary functions in a natural way.
5. THE HIERARCHY G_n + θ_k AND THE GRZEGORCZYK HIERARCHY
In this section we show that we can extend the results of the previous section to the higher levels of the Grzegorczyk hierarchy, E^n for n ≥ 3. Let us define a hierarchy of recursive functions on the reals. Each level is denoted by G_n + θ_k, for n ≥ 3. The first level is G_3 + θ_k = L + θ_k, and each following level is defined either by adding a new basic function, or by allowing the system to solve a certain number of non-linear differential equations.
Definition 5.1 (The hierarchy G_n + θ_k). For n ≥ 3, let G_n + θ_k be the smallest class containing the constants 0, 1, −1, and π, the projections, and θ_k, which is closed under composition and linear integration, and allows up to n − 3 applications of the following operator:
Non-linear integration (NLI): if a unary scalar function f belongs to G_n + θ_k then the solution y₁ of the initial value problem (15) belongs to G_n + θ_k.
We will now show that G_n + θ_k, for any fixed k > 2, corresponds to the nth level of the Grzegorczyk hierarchy in the same way that L + θ_k corresponds to the elementary functions. First, we will show that the operator NLI carries us up the levels of the G_n + θ_k hierarchy just as iteration does for the E^n.
Proposition 5.2. For any function f ∈ G_n + θ_k there is an extension of the iteration F(x, t) = f^[t](x) in G_{n+1} + θ_k.
Proof. F(x, t) is given by y₁ in the Equation (15) with the initial conditions y₁(0) = y₂(0) = x. Note that |x|^k can be defined in L + θ_k, and it can be proved that, for all t ∈ N, y₁(x, t) = f^[t](x). The function y₁ belongs to G_{n+1} + θ_k since it is defined from f, which is in G_n + θ_k, with only one application of NLI. The dynamics for Equation (15) is similar to Equation (7) for iterated multiplication, in which y₁ and y₂ are held constant for alternating intervals. The main difference is that the terms |cos πt|^{k+1} and |sin πt|^{k+1} on the left ensure that y₁ converges exactly to f(y₂), and y₂ exactly to y₁, by the end of the interval [n, n + 1]. The term θ_k(t) on the right ensures that the solution is constant, y₁ = y₂ = x, for t < 0. The proof is similar to the one given in [CMC00, Proposition 9].
Now define a series of functions exp_n, n ≥ 2, with exp_2 = exp and each exp_{n+1} obtained from exp_n by iteration, for x ∈ N. Since exp_2 ∈ L + θ_k, Proposition 5.2 shows that G_n + θ_k contains extensions to the reals of exp_{n−1} for all n ≥ 3. Since E_2 is elementary, an extension of it to the reals belongs to L + θ_k by Proposition 4.8, and since E_{n+1} is defined from E_n by iteration, G_n + θ_k contains extensions to the reals of E_{n−1} for all n ≥ 3. For simplicity, we will also use the notation exp_n(x) and E_n(x) for monotone extensions of exp_n and E_n to x ∈ R.
In fact, finite compositions of exp_{n−1} and E_{n−1} put an upper bound on the growth of functions in G_n + θ_k, so in analogy to Proposition 4.3 we have the following:
Proposition 5.3. Let f be a function of arity m in G_n + θ_k, for n ≥ 3. Then there is a constant d and constants A, B, C, D such that, for all x⃗,
||f(x⃗)|| ≤ A exp_{n−1}^[d](B ||x⃗|| + C) + D,
where ||x⃗|| = max_j |x_j|. Moreover, the same is true (with different constants) if exp_{n−1} is replaced by E_{n−1}.
Now, as before, to compare an analog class to a digital one, we will say that a real function is computable in E^n if it can be approximated by a sequence of rationals given by a functional in E^n, and we will say that an integer function is in an analog class if some extension of it to the reals is. We will now show that G_n + θ_k, for any fixed k > 2, corresponds to the nth level of the Grzegorczyk hierarchy in the same way that L + θ_k corresponds to the elementary functions.
Proposition 5.4. The following correspondences exist between G_n + θ_k and the levels of the Grzegorczyk hierarchy, E^n, for all n ≥ 3:
1. Any function in G_n + θ_k is computable in E^n.
2. If f ∈ G_n + θ_k is an extension to the reals of some f̃ on N, then f̃ ∈ E^n.
3. Conversely, if f ∈ E^n then some extension of it to the reals is in G_n + θ_k.
Proof. To prove that functions in G_n + θ_k can be computed with functionals in E^n, we follow the proof of Proposition 4.4 for composition and linear integration. However, when we use Euler's method for numerical integration, we now apply the bound of Proposition 5.3 and set m, n and N to grow as A E_{n−1}^[d](·) for a certain d. Since this is in E^n, so are the functionals Φ and Ψ. Numerical integration of functions in G_n + θ_k defined by Equation (15) can also be done in E^n, although the numerical techniques involved are slightly different since y⃗′ is defined implicitly. Now, as in Corollary 4.5, if f takes integer values on the integers we just have to approximate it to within an error less than 1/2, so the restriction of f to the integers is in E^n.
Conversely, the remarks above show that G_n + θ_k contains an extension to the reals of E_{n−1}, and Lemma 4.6 shows that it contains the other initial functions of E^n as well. Furthermore, it is closed under bounded sum and bounded product for integer-valued functions: the proof of Lemma 4.7 proceeds as before, except that, using Proposition 5.3 again, the gain γ is now of the order of E_{n−1}^[d]; since an extension of this to the reals can be defined in G_n + θ_k, so can the linear differential equation (7). We showed that G_n + θ_k contains extensions to the reals of all initial functions of E^n and is closed under composition and bounded sums and products for integer-valued functions. Therefore, G_n + θ_k contains extensions to the reals of all functions in E^n.
A few remarks are in order. First, we stress that the analog model we define contains exactly the nth level of the Grzegorczyk hierarchy if it is allowed to solve up to n − 3 non-linear differential equations of the form of Equation (15) and no other non-linear differential equations.
Secondly, notice that since ∪_n E^n = PR, Proposition 5.4 implies that:
Corollary 5.5. Any function in ∪_n (G_n + θ_k) is computable in PR, and all primitive recursive functions are contained in ∪_n (G_n + θ_k).
Finally, instead of allowing our model to solve Equation (15), we can keep everything linear and define G_n + θ_k by adding a new basic function which is an extension to the reals of E_{n−1}. While this produces a smaller set of functions on R, it produces extensions to R of the same set of functions on N as the class defined here.
6. CONCLUSION
We have defined a new version of Shannon's General Purpose Analog Computer in which the integration operator is restricted in a natural way: to solving linear differential equations. When we add the ability to measure inequalities in a differentiable way, the resulting system L + θ_k corresponds exactly to the elementary functions E. Furthermore, we have defined a hierarchy of analog classes G_n + θ_k by allowing n − 3 non-linear equations of a certain form, and we have shown that this hierarchy corresponds, level by level, to the Grzegorczyk hierarchy E^n for n ≥ 3. When combined with the earlier result [CMC00] that G + θ_k contains the primitive recursive functions, this suggests that subclasses of the primitive recursive functions correspond nicely to natural subclasses of analog computers.
Several open questions suggest themselves:
1. We used a very specific kind of non-linear operator to define the classes G_n + θ_k in Section 5. Is there a more natural family of non-linear differential equations, whose solutions are total functions, which yields non-elementary functions?
2. Can we do without π in the definition of L + θ_k? Note that we do not need to include it in the definition of G_n + θ_k for n > 3, since we can define limited integration and then linear integration from the non-linear operator, and finally obtain π from these. However, we have been unable to find a way to define π from linear integration alone.
3. Is L + θ_k closed under bounded product for real-valued functions, and not just integer-valued ones? We think this is unlikely, since it would require some form of iteration like that in Equation 15 where y₁ and y₂ converge to the desired values exactly. We see no way to do this without highly non-linear terms. If L + θ_k is not closed under real-valued bounded products then we could ask what class would result from that additional operation. While the set of integer functions which have real extensions in the class would remain the same, the set of functions on the reals would be larger.
4. By adding to our basis a function that grows faster than any primitive recursive function, such as the Ackermann function, we can obtain transfinite levels of the extended Grzegorczyk hierarchy [Ros84]. It would be interesting to find natural analog operators that can generate such functions.
5. How robust are these systems in the presence of noise? Since it is based on linear differential equations, L + θ_k may exhibit a fair amount of robustness to perturbations. We hope to quantify this, and explore whether this makes these models more robust than other continuous-time analog models, which are highly non-linear.
6. Our results on the Grzegorczyk hierarchy seem to be somehow related to [Gak99], whose framework is the BSS model of computation. In [Gak99] the recursive characterization of the BSS-computable functions [BSS89] is restricted to match the recursive definition of the classes E^n. This might suggest that our continuous-time operations on real functions, namely the various forms of integration we consider, are related to certain restricted types of BSS-machines.
It is interesting that linear integration alone, in the presence of θ_k, gives extensions to the reals of all elementary functions, since these are all the functions that can be computed by any practically conceivable digital device. In terms of dynamical systems, L + θ_k corresponds to cascades of finite depth, each level of which depends linearly on its own variables and the output of the level before it. We find it surprising that such systems, as opposed to highly non-linear ones, have so much computational power.
Finally, we note that while including θ_k as an oracle makes these functions non-analytic, by increasing k they can be made as smooth as we like. Therefore, we claim that these are acceptable models of real physical phenomena, and may be more realistic in certain cases than either discrete or hybrid systems.
ACKNOWLEDGMENTS
We thank Jean-Sylvestre Gakwaya, Norman Danner, Robert Israel, Kathleen Merrill, Spootie Moore, Bernard Moret, and Molly Rose for helpful discussions, and the
anonymous referees for important suggestions for improvement. This work was partially
supported by FCT PRAXIS XXI/BD/18304/98 and FLAD 754/98. M.L.C. and J.F.C.
also thank the Santa Fe Institute, for hosting visits that made this work possible, and
LabMAC (Laboratorio Modelos e Arquitecturas Computacionais da FCUL).
--R
Equations Di
Achilles and the tortoise climbing up the hyper-arithmetical hierar- chy
Universal computation and other capabilities of hybrid and continuous dynamical systems.
On a theory of computation and complexity over the real numbers: NP-completness
Computational models and function algebras.
An Introduction to Recursive Function Theory.
Cambridge University Press
Extensions de la Hi
Some classes of recursive functions.
Computable functionals.
On the de
Ordinary Di
Matrix Analysis.
Complexity of primitive recursion.
Complexity Theory of Real Functions.
Extension de la notion de fonction r
Real number models under various sets of operations.
Unpredictability and undecidability in dynamical systems.
Recursion theory on the reals and continuous-time computation
Dynamical recognizers: real-time language recognition by analog com- puters
Classical Recursion Theory II.
On the computational power of continuous time neural networks.
A survey of continuous-time computation theory
Abstract computability and its relation to the general purpose analog computer.
Computability in Analysis and Physics.
Functions and Hierarchies.
Analog computation with dynamical systems.
Mathematical theory of the di
Neural Networks and Analog Computation: Beyond the Turing Limit.
The complexity of analog computation.
Mathematics and Computers in Simulation
Computable Analysis.
Subclasses of computable real functions.
--TR
The complexity of analog computation
Complexity theory of real functions
Universal computation and other capabilities of hybrid and continuous dynamical systems
Recursion theory on the reals and continuous-time computation
Dynamical recognizers
Achilles and the Tortoise climbing up the hyper-arithmetical hierarchy
Analog computation with dynamical systems
Neural networks and analog computation
Computable analysis
Iteration, inequalities, and differentiability in analog computers
Ordinary Differential Equations
U.S. Technological Enthusiasm and British Technological Skepticism in the Age of the Analog Brain
Subclasses of Coputable Real Valued Functions
The Computational Power of Continuous Time Neural Networks
--CTR
Giuseppe Trautteur , Guglielmo Tamburrini, A note on discreteness and virtuality in analog computing, Theoretical Computer Science, v.371 n.1-2, p.106-114, February, 2007
Manuel L. Campagnolo , Kerry Ojakian, The Methods of Approximation and Lifting in Real Computation, Electronic Notes in Theoretical Computer Science (ENTCS), 167, p.387-423, January, 2007
Jerzy Mycka , Jos Flix Costa, Real recursive functions and their hierarchy, Journal of Complexity, v.20 n.6, p.835-857, December 2004
Daniel Silva Graa , Jos Flix Costa, Analog computers and recursive functions over the reals, Journal of Complexity, v.19 n.5, p.644-664, October
Jerzy Mycka , Jos Flix Costa, The P NP conjecture in the context of real and complex analysis, Journal of Complexity, v.22 n.2, p.287-303, April 2006
John V. Tucker , Jeffery I. Zucker, Computability of analog networks, Theoretical Computer Science, v.371 n.1-2, p.115-146, February, 2007
Manuel Lameiras Campagnolo, Continuous-time computation with restricted integration capabilities, Theoretical Computer Science, v.317 n.1-3, p.147-165, June 4, 2004
Jerzy Mycka , Jos Flix Costa, A new conceptual framework for analog computation, Theoretical Computer Science, v.374 n.1-3, p.277-290, April, 2007
Olivier Bournez , Emmanuel Hainry, Recursive Analysis Characterized as a Class of Real Recursive Functions, Fundamenta Informaticae, v.74 n.4, p.409-433, December 2006
Olivier Bournez , Emmanuel Hainry, Elementarily computable functions over the real numbers and R-sub-recursive functions, Theoretical Computer Science, v.348 n.2, p.130-147, 8 December 2005 | elementary functions;dynamical systems;subrecursive functions;recursion theory;analog computation;primitive recursive functions;grzegorczyk hierarchy;differential equations |
636581 | A polytopal generalization of Sperner''s lemma. | We prove the following conjecture of Atanassov (Studia Sci. Math. Hungar. 32 (1996), 71-74). Let T be a triangulation of a d-dimensional polytope P with n vertices v1, v2,...., vn. Label the vertices of T by 1,2,..., n in such a way that a vertex of T belonging to the interior of a face F of P can only be labelled by j if vj is on F. Then there are at least n - d full dimensional simplices of T, each labelled with d different labels. We provide two proofs of this result: a non-constructive proof introducing the notion of a pebble set of a polytope, and a constructive proof using a path-following argument. Our non-constructive proof has interesting relations to minimal simplicial covers of convex polyhedra and their chamber complexes, as in Alekseyevskaya (Discrete Math. 157 (1996), 15-37) and Billera et al. (J. Combin. Theory Ser. B 57 (1993), 258-268). | Introduction
Sperner's Lemma is a combinatorial statement about labellings of triangulated
simplices whose claim to fame is its equivalence with the topological fixed-point
theorem of Brouwer [8, 16]. In this paper we prove a generalization of Sperner's
Lemma that settles a conjecture proposed by K.T. Atanassov [2].
Consider a convex polytope P in R^d defined by n vertices v_1, v_2, ..., v_n. For brevity, we will call such a polytope an (n, d)-polytope. Throughout the paper we will follow the terminology of the book [20]. By a triangulation T of the polytope P we mean a finite collection of distinct simplices such that: (i) the union of the simplices of T is P, (ii) every face of a simplex in T is in T, and (iii) any two simplices in T intersect in a face common to both. The points v_1, ..., v_n are called vertices of P to distinguish them from vertices of T, the triangulation. Similarly, a simplex spanned by vertices of P will be called a simplex of P to distinguish it from simplices involving other vertices of T. If S is a subset of P, then the carrier of S, denoted carr(S), is the smallest face F of P that contains S. In that case we say S is carried by F. A cover C of a convex polytope P is a collection of full dimensional simplices in P such that ∪_{σ∈C} σ = P. The size of a cover is the number of simplices in the cover.
2000 Mathematics Subject Classification. Primary 52B11, Secondary 55M20.
Key words and phrases. Sperner's lemma, polytope, path-following, simplicial algorithms.
† Department of Mathematics, University of California Davis, Davis, CA 95616, [email protected].
‡ Magdalen College, Oxford University, Oxford OX1 4AU, U.K., [email protected].
§ Department of Mathematics, Harvey Mudd College, Claremont, CA 91711, U.S.A., [email protected].
Let T be a triangulation of P, and suppose that the vertices of T have a labelling satisfying these conditions: each vertex of P is assigned a unique label from the set {1, 2, ..., n}, and each other vertex v of T is assigned a label of one of the vertices of P in carr(v). Such a labelling is called a Sperner labelling of T. A d-simplex in the triangulation is called a fully-labeled simplex, or simply a full cell, if all its labels are distinct. The following result was proved by Sperner [16] in 1928:
Sperner's Lemma. Any Sperner labelling of a triangulation of a d-simplex must
contain an odd number of full cells; in particular, there is at least one.
Constructive proofs of Sperner's lemma [4, 9, 12] emerged in the 1960's, and
these were used to develop constructive methods for locating xed points [18, 19].
Sperner's lemma and its variants continue to be useful in applications. For example,
they have recently been used to solve fair division problems in game theory [14, 17].
The main purpose of this paper is to present a solution of the following conjecture.
Conjecture (Atanassov). Any Sperner labelling of a triangulation of an (n; d)-
polytope must contain at least (n d) full cells.
In 1996, K.T. Atanassov [2] stated the conjecture and gave a proof for the case
2. Note that Sperner's lemma is exactly the case 1. In this
paper we prove this conjecture for all (n; d)-polytopes. Here is the central result of
our paper:
Theorem 1. Any Sperner labelling of a triangulation T of an (n, d)-polytope P must contain at least (n − d) full cells. Moreover, the collection of full cells in T corresponds to a cover of P under the piecewise linear map that sends each vertex of T to the vertex of P that shares the same label.
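As a quick sanity check of the statement (our own illustration, not part of the proofs), the following snippet counts the fully-labelled triangles in a fan triangulation of an n-gon, where every vertex of the triangulation is a vertex of the polygon and so the identity labelling is a Sperner labelling; the count equals n − 2 = n − d, showing the bound is tight for polygons.

```python
def full_cells_fan(n):
    """Fan triangulation of an n-gon from vertex 1, with the identity Sperner labelling."""
    triangles = [(1, i, i + 1) for i in range(2, n)]
    labels = {v: v for v in range(1, n + 1)}
    full = [t for t in triangles if len({labels[v] for v in t}) == 3]
    return len(full)

for n in (4, 5, 6, 7):
    print(n, full_cells_fan(n), ">=", n - 2)   # n - d with d = 2
```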
We provide a non-constructive and a constructive proof of Theorem 1. The non-constructive proof in Section 2 is obtained via a degree argument and the notion of a pebble set. Section 3 develops background on path-following arguments in polytopes that is closely related to classical path-following arguments for Sperner's lemma [4, 9, 18]. This is applied to give a constructive proof for simplicial polytopes. In Section 4 we extend the construction to prove the conjecture for arbitrary polytopes. The final section of the paper is devoted to two interesting consequences of Theorem 1 and its proofs. From our first proof we derive the following corollary:
Corollary 2. Let c(P) denote the covering number of an (n, d)-polytope P, which is the size of the smallest cover of P. Then c(P) ≥ n − d. This result is best possible, as the equality is attained for stacked polytopes.
We also obtain a slight strengthening of Theorem 10 of [10]. We need to recall the notion of the chamber complex of a polytope P (see [1]): let Σ be the set of all d-simplices of P. Denote by bdry(σ) the boundary of a simplex σ. Consider the set of open polyhedra given by the connected components of P \ ∪_{σ∈Σ} bdry(σ). A chamber is the closure of one of these components. The chamber complex of P is the polyhedral complex given by all chambers and their faces.
Corollary 3. Let P be an (n, d)-polytope with vertices v_1, v_2, ..., v_n. Let {C_1, C_2, ..., C_n} be a collection of closed sets covering the (n, d)-polytope P, such that each face F is covered by ∪{C_h : v_h ∈ F}.
Then, for each p ∈ P, there exists a subset J_p ⊆ {1, 2, ..., n} such that (1) p lies in the convex hull of the vertices v_j with j ∈ J_p, (2) J_p has cardinality d + 1, (3) ∩_{j∈J_p} C_j ≠ ∅, and (4) if p and q are interior points of the same chamber of P, then J_p = J_q. There are at least c(P), the covering number, different such subsets, and the simplices of P indicated by the labels in these subsets form a cover of P.
Figure 1 illustrates the content of the above corollary with an example.
Figure 1. Part (A) shows several closed sets covering a hexagon and their four intersection points. The points of intersection correspond to a cover of the hexagon, in this case a triangulation, illustrated in part (B).
2. A Non-Constructive Proof using Pebble Sets
The non-constructive proof of Theorem 1 that we give in this section is an extension of "degree" arguments for proving the usual Sperner's lemma. We first establish a proposition that yields the covering property, then show how the construction of a pebble set will yield a lower bound for the number of full cells.
Let P be an (n, d)-polytope with a Sperner-labelled triangulation T. Consider the piecewise linear (PL) map f : P → P that maps each vertex of T to the vertex of P that shares the same label, and is linear on each d-simplex of T.
Proposition 4. The map f defined as above is surjective, and thus the collection of full cells in T forms a cover of P under f.
Before proving this result, we recall a few facts about the degree of a map f between manifolds. If y = f(x) and the Jacobian determinant of f exists and is non-zero at x, then x is called a regular point of f, and the sign of x is the sign (±1) of this determinant. The point y is called a regular value of f if every pre-image of y is a regular point. Any regular value y has finitely many pre-images, and the sum of the signs of its pre-images is, in fact, independent of the choice of y and is known as the degree of the map f.
The degree is a homotopy invariant of mappings between manifolds (relative to their boundaries) and may also be computed as the multiplicative factor induced by the map f on the corresponding top homology groups (relative to their boundaries). See [5, Ch.1] or [13, Sec.38] for expositions of the topological degree of simplicial maps, or [7] for the general theory.
Thus, for the map f defined above, the interior points of simplices of T are regular points, and interior points of chambers of P are regular values. Observe that the sign of a regular point x depends essentially on the orientation of the labels of the simplex of T that contains x.
Proof. We shall show, by induction on the dimension d of P, that f has degree 1. If d = 0 then P is a point and the statement is clearly true. So assume that the above statement holds for all polytopes of dimension less than d.
Given a d-dimensional polytope P, let ∂P denote the boundary of P (i.e., the union of the facets). If d = 1, ∂P consists of two points on which the map is the identity (due to the Sperner labelling), and therefore ∂f : ∂P → ∂P is the identity map on (reduced) homology groups. For d > 1, we use the Sperner labelling on the facets and the inductive hypothesis to show that the map ∂f is the identity map. Specifically, if F is any facet of P, consider the pair (∂P, ∂P − F). Then the map ∂f induces a commutative diagram of (reduced) homology groups between the long exact sequences of this pair, in which the rows are exact. Since H̃_i(∂P − F) = 0 for i ∈ {d − 1, d − 2}, the maps H_{d−1}(∂P) → H_{d−1}(∂P, ∂P − F) are isomorphisms. By excision, H_{d−1}(∂P, ∂P − F) ≅ H_{d−1}(F, ∂F), so we may take the corresponding vertical map, f_F, to be induced by f on the facet F (a polytope of one lower dimension), which by the inductive hypothesis has degree 1 and must therefore be the identity. Since the other maps are isomorphisms and f_F is the identity map, ∂f is also the identity map.
The map ∂f appears in another commutative diagram of (reduced) homology groups induced by f, relating the long exact sequences of the pair (P, ∂P), in which the rows are exact. Since H̃_i(P) = 0 for i ∈ {d, d − 1}, the connecting maps ∂ : H_d(P, ∂P) → H_{d−1}(∂P) are isomorphisms, which implies that f is the identity on H_d(P, ∂P). Hence f has degree 1.
Therefore the number of pre-images of any regular point y in the image of f is 1 (when counted with sign). So the map f is surjective, and the set of full cells in T gives a cover of P under f.
Thus if we can find a set of points in P such that any d-simplex spanned by vertices of P contains at most one such point in its interior, then the pre-image of each such point will correspond to a full cell in P (in fact, to an odd number of full cells, because the number of pre-images, counted with sign, is 1). Thus finding full cells in Theorem 1 corresponds precisely to looking for the following kind of finite point set.
Definition. A pebble set of an (n, d)-polytope P is a finite set of points (pebbles) such that each d-simplex of P contains at most one pebble in its interior.
It is worth remarking now two facts about pebble sets. First, the larger the pebble set, the more full cells we can identify, i.e., the number of full cells is at
least the cardinality of the largest pebble set in P. Second, by the definition of a chamber, only one pebble can exist within a chamber, and when choosing a pebble p we have the freedom to replace it by any point p′ in the interior of the same chamber, because p and p′ are contained in the same set of d-simplices. We now show that a pebble set of size (n − d) exists for any (n, d)-polytope P by a "facet-pivoting" construction.
Figure 2. A pebble set with pebbles p_1, p_2, p_3.
In the simplest situation, if one of the facets of P is a simplex, call this simplex the base facet. Choose any point q_0 (the basepoint) in the interior of this base facet. Now for each vertex v_i not in the base facet, choose a point p_i along the line between q_0 and v_i, very close to v_i. Exactly how close will be specified in the proof. The collection of all such points {p_i} forms a pebble set; it has size (n − d) because the simplicial base facet has d vertices. See, for example, Figure 2 for the case of a pentagon; it is a (5, 2)-polytope with pebble set {p_1, p_2, p_3}.
If none of the facets are simplicial, then one must choose a non-simplicial facet as base. In this case, choose a pebble set {q_i} for the base facet (an inductive hypothesis is used here) and then use any one of them as a basepoint q_0 to construct the p_i as above. The remaining pebbles are obtained from the other q_i by perturbing them so they are interior to P. See Figure 3.
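For a polygon with an edge as simplicial base facet, the construction can be made completely concrete. The snippet below is our own illustration; it uses the specific choices ε = H/(2·diam(P)) and p_i = (1 − ε)v_i + ε q_0 appearing in (1)–(2) of the proof that follows (any sufficiently small ε works), and produces the n − 2 pebbles of a regular pentagon as in Figure 2.

```python
import itertools, math

def pebbles(vertices, base=(0, 1)):
    """Pebbles for a convex polygon: q0 = midpoint of the base edge, p_i = (1-eps)*v_i + eps*q0."""
    n = len(vertices)
    diam = max(math.dist(a, b) for a, b in itertools.combinations(vertices, 2))

    def dist_point_segment(p, a, b):
        ax, ay, bx, by, px, py = *a, *b, *p
        t = max(0.0, min(1.0, ((px-ax)*(bx-ax) + (py-ay)*(by-ay)) / ((bx-ax)**2 + (by-ay)**2)))
        return math.dist(p, (ax + t*(bx-ax), ay + t*(by-ay)))

    # H: least distance from a vertex to the convex hull of the others; for a convex
    # polygon it is attained on the segment joining the vertex's two neighbours.
    H = min(dist_point_segment(vertices[i], vertices[i-1], vertices[(i+1) % n]) for i in range(n))
    eps = H / (2 * diam)
    q0 = tuple((vertices[base[0]][k] + vertices[base[1]][k]) / 2 for k in range(2))
    return [tuple((1-eps)*vertices[i][k] + eps*q0[k] for k in range(2))
            for i in range(n) if i not in base]

pentagon = [(math.cos(2*math.pi*j/5), math.sin(2*math.pi*j/5)) for j in range(5)]
print(pebbles(pentagon))      # three points, each very close to a non-base vertex
```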
Theorem 5. Any (n, d)-polytope contains a pebble set of size (n − d).
Proof. We induct on the dimension d. For dimension d = 1, a polytope is just a line segment spanned by two vertices. Hence n − d = 1, and clearly any point in the interior of the line segment forms a pebble set.
For any other dimension d, let V = {v_1, ..., v_n} ⊂ R^d denote the vertices of the given (n, d)-polytope P. Choose any facet F of P as a "base facet", and suppose without loss of generality that it is the convex hull of the last k vertices v_{n−k+1}, ..., v_n, where k ≥ d. Then F is a (d − 1)-dimensional polytope with k vertices, and by the inductive hypothesis, F has a pebble set Q_F with k − (d − 1) pebbles q_0, q_1, ..., q_{k−d}. (If F is a simplex, then k = d and Q_F consists of one point q_0, which can be taken to be any point in the interior of F.)
Let diam(P ) denote the diameter of the polytope P , i.e., the maximum pairwise
distance between any two points in P . Let H be the minimum distance between
any vertex v ∈ V and the convex hull of the vertices in V \ {v}. Since there are finitely many such distances and the vertices are in convex position, H exists and is positive. Set
(1)   ε = H / (2 · diam(P)).
Using q_0 ∈ Q_F as the basepoint, let Q denote the collection of (n − k) points defined by
(2)   p_i = (1 − ε) v_i + ε q_0,   for each vertex v_i not in F (i.e., 1 ≤ i ≤ n − k),
where ε is the positive constant given by (1). Thus points in Q lie along straight lines extending from q_0 and very close to the vertices of P not in F.
Figure 3. A pebble set with pebbles p_i and q′_i. The pebbles q′_i lie just above the q_i (not shown) on the base of the polytope. Note how q_0, q_1, q_2 arise from the pebble set construction in Figure 2.
Because q_i is in F, it lies on the boundary of P and borders exactly one chamber of P (since by induction it is interior to a single chamber in the facet F). Ignoring q_0 momentarily, for 1 ≤ i ≤ k − d let q′_i denote a point obtained by "pushing" q_i into the interior of the unique chamber that it borders. Let Q′_F = {q′_1, ..., q′_{k−d}}.
We shall show that Q ∪ Q′_F is a pebble set for the polytope P. Note that if P has a simplicial facet F, then with this facet as base, the set Q suffices; for then Q′_F is empty and the construction (2) yields the required number of pebbles by choosing any q_0 in the interior of a simplicial facet F.
First we prove some important facts about the p_i and the q′_i.
Lemma 6. Let S be a d-simplex spanned by vertices of P. If S contains p_i, it must also contain v_i as one of its vertices.
Proof. By construction, each p_i has the property that p_i is not in the convex hull of V \ {v_i}. This follows because ||p_i − v_i|| = ε ||q_0 − v_i|| ≤ ε · diam(P) = H/2, while the distance from v_i to the convex hull of V \ {v_i} is at least H; hence the distance of p_i from the convex hull of V \ {v_i} is greater than or equal to H/2.
Since the convex hull of V \ {v_i} does not contain p_i, if S is to contain p_i it must contain v_i as one of its vertices.
Lemma 7. Let S be a non-degenerate d-simplex spanned by vertices of P. Then q′_i is in S if and only if q_i is in S ∩ F.
Proof. Since q′_i is in the unique chamber of P that q_i borders, any non-degenerate simplex containing q_i must contain q′_i. Conversely, any simplex S containing q′_i must contain its chamber and therefore contains q_i. Since q_i is in F, it follows that q_i is in S ∩ F.
The next three lemmas will show that Q ∪ Q′_F is a pebble set for P.
Lemma 8. Any d-simplex S spanned by vertices of P contains no more than one pebble of Q.
Proof. If S is degenerate (i.e., the convex hull of its vertices is not full dimensional), then it clearly contains no pebbles of Q, because the p_i are by construction in the interior of a chamber. So we may assume that S is non-degenerate.
Let s_1, ..., s_{d+1} denote the vertices of S. Suppose by way of contradiction that S contained more than one point of Q. Then p_{i_0}, p_{j_0} are contained in S for distinct indices i_0 ≠ j_0, and Lemma 6 implies that v_{i_0} and v_{j_0} must both be vertices of S. Without loss of generality, let s_1 = v_{i_0} and s_2 = v_{j_0}. Let A be the matrix whose columns consist of the vertices of S followed by q_0, adjoined with a row of 1's:
A = ( s_1  s_2  ···  s_{d+1}  q_0 ;  1  1  ···  1  1 ).
This is a (d + 1) × (d + 2) matrix that has rank (d + 1) because the s_i are affinely independent (by the non-degeneracy of S). So the kernel of A, ker(A), is 1-dimensional. Note that p_{i_0} ∈ S implies that it is a convex combination of the first (d + 1) columns of A. On the other hand, by construction, it is also a convex combination of s_1 and q_0. Thus there exist constants 0 ≤ x_1, ..., x_{d+1} ≤ 1, summing to 1, such that
(3)   p_{i_0} = (1 − ε) s_1 + ε q_0 = x_1 s_1 + x_2 s_2 + ··· + x_{d+1} s_{d+1},
where the first equality follows from (2). Similarly,
p_{j_0} = (1 − ε) s_2 + ε q_0 = y_1 s_1 + y_2 s_2 + ··· + y_{d+1} s_{d+1}
for some constants 0 ≤ y_1, ..., y_{d+1} ≤ 1 summing to 1. The above equations show that the vectors
(x_1 − (1 − ε), x_2, ..., x_{d+1}, −ε)   and   (y_1, y_2 − (1 − ε), y_3, ..., y_{d+1}, −ε)
are both in ker(A).
But since ker(A) is 1-dimensional, and the last coordinates of these vectors are equal, all entries of these vectors are identical. In particular, y_1 = x_1 − (1 − ε). We now claim that x_1 < 1 − ε, which would show that p_{j_0} could not have been in S after all (its coefficient y_1 would be negative), a contradiction.
To establish the claim, use equations (2) and (3) to express q_0 as an affine combination of the vertices of S:
(4)   q_0 = ((x_1 − (1 − ε))/ε) s_1 + (x_2/ε) s_2 + ··· + (x_{d+1}/ε) s_{d+1}.
Since q_0 is not in the interior of S, it must be the case that some coefficient in (4) is non-positive. If x_1 ≥ 1 − ε, then all coefficients are non-negative and at least one of them is zero; then by (4), q_0 is on a facet of S. This means that it is spanned by d vertices on the facet F of P. Thus the vertices of S must include those d vertices; but by Lemma 6 the vertices v_{i_0} and v_{j_0} of S were not on the facet F (because S is non-degenerate), so we obtain a contradiction, since S cannot contain more than d + 1 vertices. Thus x_1 < 1 − ε, as desired.
Lemma 9. Any d-simplex S spanned by vertices of P contains no more than one pebble in Q′_F.
Proof. Since S ∩ F is a simplex in F that contains at most one point of Q_F, by Lemma 7, S can contain at most one point of Q′_F.
Lemma 10. Any d-simplex S spanned by vertices of P cannot contain pebbles of Q and Q′_F simultaneously.
Proof. Suppose S contained a point q′_i of Q′_F. Then by Lemma 7, S ∩ F contains the point q_i of Q_F. Since Q_F was a pebble set for the facet F, S ∩ F cannot also contain q_0.
If S also contained a pebble p_{i_0} of Q, then by Lemma 6, S contains v_{i_0} as a vertex. Since S ∩ F contains q_i, which is interior to a chamber of F, S must also contain d vertices of F. Since q_0 is in F (but not in S ∩ F), q_0 is expressible as a linear combination (but not a convex combination) of those d vertices. This linear combination, when substituted for q_0 in (2), would show that the pebble p_{i_0} is not a convex combination of v_{i_0} and those d vertices. This contradicts the fact that p_{i_0} was in S to begin with.
Together, the three lemmas above show that S cannot contain more than one point of Q ∪ Q′_F, which concludes the proof of Theorem 5.
Together, Proposition 4 and Theorem 5 prove Theorem 1.
3. Graphs for Path-following and Simplicial Polytopes.
Sperner's lemma has a number of constructive proofs which rely on "path-following" arguments (see, for example, the survey of Todd [18]). Path-following arguments work by using a labelling to determine a path through simplices in a triangulation, in which one endpoint is known and the other endpoint is a full cell. In this section we adapt these ideas for Sperner-labelled polytopes; they are used in the next section to give a constructive "path-following" proof of Theorem 1.
Let P be an (n, d)-polytope with triangulation T and a Sperner labelling using the label set {1, 2, ..., n}. We define some further terminology and notation that we will use from now on. Let L(σ), the label set of σ, denote the set of distinct labels of vertices of σ. Let L(F) denote the label set of a face F of P. As defined earlier, a d-simplex σ in T is a full cell if the vertex labels of σ are all distinct. Similarly, a (d − 1)-simplex τ in T is a full facet if the vertex labels of τ are all distinct. Note that a full facet on the boundary of P can be regarded as a full cell in that facet.
Definition. Given a Sperner-labelled triangulation T of a polytope P, we define three useful graphs:
1. The nerve graph G is a graph whose nodes are the simplices of T whose label set is of size at least d. Formally, σ is a node of G if |L(σ)| ≥ d. Two nodes in G are adjacent if (as simplices) one is a face of the other.
2. If K is a subset of the label set {1, 2, ..., n} of size (d − 1), the derived graph G_K is the subgraph of the nerve graph G consisting of nodes in G whose label sets contain K.
3. Let G′ denote the full cell graph, whose nodes are the full cells in the nerve graph G. Two full cells σ, τ are adjacent in G′ if there exists a path from σ to τ in G that does not intersect any other full cell. If Ḡ is a connected component of G, construct the full cell graph Ḡ′ similarly.
Thus the nodes of G and G_K are either full cells, full facets, or d-simplices with exactly one repeated label. The full cell graph G′ only has full cells as nodes.
Example. The pentagon in Figure 4 has dimension d = 2. Let K be a label set of cardinality (d − 1) = 1. Then the derived graph G_K consists of the 1-simplices and 2-simplices that are darkly shaded, and it is a subgraph of the nerve graph G consisting of the dark and light-shaded 1-simplices and 2-simplices in Figure 4. In Figure 4, G′ is a 3-node graph with nodes A, B, C, the full cells. In G′, A is adjacent to B, and B is adjacent to C, but A is not adjacent to C.
As the example shows, the nerve graph G branches in (d + 1) directions at full cells, while the derived graph G_K is the subgraph consisting of paths or loops that "follow" the labels of K along the boundary of the simplices in G. We prove these assertions.
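These graphs are easy to build by machine, which is how path-following algorithms use them. The snippet below (our own illustration, using the trivially Sperner-labelled fan triangulation of a pentagon rather than the triangulation of Figure 4) constructs the derived graph G_K for K = {1} and prints the degree of each node, matching the degree pattern asserted in Lemma 11 below.

```python
from itertools import combinations

triangles = [(1, 2, 3), (1, 3, 4), (1, 4, 5)]          # fan triangulation of a pentagon
labels = {v: v for v in range(1, 6)}                   # identity Sperner labelling
K = {1}                                                # a label set of size d - 1 = 1

simplices = set(triangles)
for t in triangles:                                    # add all edges of the triangulation
    simplices.update(combinations(t, 2))

L = lambda s: {labels[v] for v in s}
nodes = [s for s in simplices if len(L(s)) >= 2 and K <= L(s)]   # nodes of G_K

def adjacent(a, b):                                    # one simplex is a face of the other
    return a != b and (set(a) <= set(b) or set(b) <= set(a))

for s in sorted(nodes, key=lambda s: (len(s), s)):
    degree = sum(adjacent(s, t) for t in nodes)
    print(s, "degree", degree)                         # degrees are 1 or 2, as in Lemma 11
```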
Lemma 11. The nodes of the derived graph G_K are either of degree 1 or 2, for any K of size d − 1. A node is of degree 1 if and only if it is a full facet on the boundary of P. Hence G_K is a graph whose connected components are either loops or paths that connect pairs of full facets on the boundary of P.
Proof. Recall that each node of G_K has a label set containing K and is either a full cell, full facet, or d-simplex with exactly one repeated label.
Figure 4. A triangulated (5, 2)-polytope (a pentagon) with a Sperner labelling. For K as in the example above, the nodes of G_K consist of the dark-shaded simplices, the nodes of G consist of the dark and light-shaded simplices, and the nodes of G′ consist of the three full cells marked A, B and C.
If σ is a full cell, then since |L(σ)| = d + 1 we see that L(σ) consists of the labels in K and two other labels l_1, l_2. There are exactly two facets of σ whose label sets contain K; these are the full facets with label sets K ∪ {l_1} and K ∪ {l_2}, respectively. Thus σ has degree 2.
If σ is a full facet with label set containing K, then it is the face of exactly two d-simplices, unless σ is on the boundary of P, in which case it is the face of exactly one d-simplex. Thus σ is of degree 1 or 2 in G_K, and of degree 1 exactly when σ is a full facet on the boundary of P.
If σ is a d-simplex with exactly one repeated label, then it must possess exactly two full facets. Since K ⊆ L(σ), these full facets must also have label sets that contain K. Hence these two full facets are the neighbors of σ in G_K, so σ has degree 2.
Lemma 12. The nodes of the nerve graph G are of degree 1, 2, or d + 1. A node is of degree 1 if and only if it is a full facet on the boundary of P. A node is of degree d + 1 only if it is a full cell.
Proof. As noted before, each node of G is either a full cell, full facet, or d-simplex with exactly one repeated label. The arguments for the latter two cases are identical to those in the proof of Lemma 11 by letting K be the empty set.
If σ is a full cell, then every facet of σ is a full facet, hence the degree of σ is d + 1 in G.
The nerve graph G may have several components, as in Figure 4. In Proposition 16, we will establish an interesting relation between the labels carried by a component Ḡ and the number of full cells it carries. First we show that all the labels in a component are carried by the full cells.
Lemma 13. If σ is adjacent to τ in G, then L(σ) ⊆ L(τ), unless σ is a full cell, in which case L(σ) ⊇ L(τ). Thus adjacent nodes in G carry exactly the same labels unless one of them is a full cell.
Proof. Suppose σ is a d-simplex with exactly one repeated label. Then it is adjacent to two full facets with exactly the same label set, so the conclusion holds.
Otherwise, if σ is a full facet, then it is adjacent to d-simplices that contain it as a facet. Hence L(σ) ⊆ L(τ) for τ adjacent to σ.
Finally, if σ is a full cell, any simplex τ adjacent to σ in G is contained in σ as a facet, so L(τ) ⊆ L(σ) in that case.
Lemma 14. Suppose Ḡ is a connected component of G. If Ḡ contains at least one full cell as a node, then all the labels in Ḡ are carried by its full cells.
For example, in Figure 4, G has two components. One of them has no full cells. In the other component, all of its labels {1, 2, 3, 4, 5} are carried by its full cells A, B, C.
Proof. Since Ḡ is connected and contains at least one full cell, each simplex σ that is not a full cell is connected to a full cell via a path in Ḡ that does not intersect any other full cell in Ḡ. Call this path σ = σ_1, σ_2, ..., σ_m, where σ_m is a full cell. By Lemma 13, L(σ_1) ⊆ L(σ_2) ⊆ ··· ⊆ L(σ_m). Therefore the labels carried by the full cells contain all labels carried by any other node of the graph.
Since the label information in a nerve graph is found in its full cells, it suffices to understand how the full cells connect to each other.
Lemma 15. Any two adjacent nodes in G' are full cells in T whose label sets contain at least d labels in common.
Proof. Let τ_1 and τ_2 be adjacent nodes in G'. By construction they must be full cells connected by a path in G; let σ be any non-full-cell node along this path. Repeated application of Lemma 13 yields L(σ) ⊆ L(τ_1) and L(σ) ⊆ L(τ_2), so each of L(τ_1) and L(τ_2) contains at least the d labels in L(σ).
We will say the full cell graph G' is a fully d-labelled graph because it clearly satisfies four properties:
(a) all nodes in the graph are assigned (d + 1) labels (simply assign to a node τ of G' the label set L(τ)),
(b) all edges are assigned d labels (assign an edge the d labels specified in Lemma 15),
(c) the label set of an edge (σ, τ) (denoted by L(σ, τ)) is contained in L(σ) ∩ L(τ), and
(d) if τ_1, τ_2 are distinct nodes each adjacent to σ, then L(σ, τ_1) ≠ L(σ, τ_2).
Proposition 16. Suppose G' is a connected fully d-labeled graph. Let L(G') denote the set of all labels carried by simplices in G' and |G'| the number of nodes in G'. Then |G'| ≥ |L(G')| − d.
We shall use this result for graphs G' arising as the full cell graph of one connected component of a nerve graph G. In Figure 4, the full cell graph G' has just one connected component, and L(G') = {1, 2, 3, 4, 5}; indeed 3 ≥ 5 − 2.
Proof. We induct on |G'|. If |G'| = 1, the one full cell in G' has d + 1 labels. Hence |G'| = 1 ≥ (d + 1) − d = |L(G')| − d, so the statement holds.
We now assume the statement holds for fully d-labeled graphs with fewer than j nodes, and show it holds for fully d-labeled graphs G' with |G'| = j, i.e., G' has j full cells. We claim that it is possible to remove a vertex v from G' and leave G' − v connected. This is true because G' contains a maximal spanning tree, and the removal of any leaf from this tree will leave the rest of the nodes in G' connected by a path in this tree.
Now G' with v and all its incident edges removed is a new graph (denoted by G' − v) with j − 1 nodes. Note that this new graph is still fully d-labeled, so by the inductive hypothesis, |G' − v| ≥ |L(G' − v)| − d. Since v has at least d labels in common with some vertex in G' − v, by Lemma 15, v contributes at most one label not already in L(G' − v), so |G' − v| ≥ |L(G')| − 1 − d. Adding 1 to both sides gives the desired conclusion.
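For reference, the inequalities in the induction step can be collected into a single chain (this is only a restatement of the argument just given, with no additional assumptions):
\[
|G'| = |G' - v| + 1 \;\ge\; \bigl(|L(G'-v)| - d\bigr) + 1 \;\ge\; \bigl(|L(G')| - 1 - d\bigr) + 1 \;=\; |L(G')| - d .
\]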
This yields the following useful result.
Theorem 17. Let T be a Sperner-labelled triangulation of an (n, d)-polytope P. If the nerve graph G has a component Ḡ that carries all the labels of G, then T contains at least (n − d) full cells.
Proof. Use Ḡ to construct its full cell graph G' as above, which is a fully d-labelled graph. Note that if Ḡ is connected then G' is also connected. By Lemma 14, the full cells of Ḡ carry all the labels of Ḡ, so |L(G')| = n. Using Proposition 16, we have |G'| ≥ |L(G')| − d = n − d, which shows there are at least (n − d) full cells in Ḡ, and hence in G itself.
Thus to prove Atanassov's conjecture for a given (n, d)-polytope it suffices to find some component Ḡ of the nerve graph G for which L(Ḡ) = L(G). This is the central idea of the proofs in the next sections.
We now use path-following ideas to outline a proof of Atanassov's conjecture in the special case where the polytope is simplicial. This will motivate the proof of Theorem 1 for arbitrary (n, d)-polytopes in the subsequent section.
Theorem 18. If P is a simplicial polytope, there is some component Ḡ of the nerve graph G which meets every facet of P, and hence carries all labels of G.
Proof. Let F be a simplicial facet of the polytope P. Let δ(G, F) count the number of nodes of G that are simplices in F. This may be thought of as the number of endpoints of paths in G that terminate on the facet F.
Consider two "adjacent" facets F_1 and F_2 of P whose intersection is a ridge of the polytope P spanned by (d − 1) vertices of P. These vertices have distinct labels; let K be their label set. The derived graph G_K consists of loops or paths whose endpoints in G_K must be full facets in F_1 or F_2, since the Sperner labelling guarantees that no other facet of P has a label set containing K.
Since every facet of P is simplicial, all the full facets in F_1 and F_2 contain K in their label set. Thus all the nodes of G that are full facets in F_1 and F_2 must also be nodes in the graph G_K. Since G_K is a subgraph of G and consists of paths with endpoints that pair up full facets in F_1 and F_2, we see that δ(G, F_1) ≡ δ(G, F_2) mod 2. In fact, since paths in G_K are connected, this argument shows that
δ(Ḡ, F_1) ≡ δ(Ḡ, F_2) mod 2
for any connected component Ḡ of G.
Since F_1 and F_2 were arbitrary, the same argument holds for any two adjacent facets. This yields the somewhat surprising conclusion that the parity of δ(Ḡ, F) is independent of the facet F. We denote this parity by δ(Ḡ). Since δ(G, F) is also independent of the facet, we can define δ(G) similarly.
Since δ(G, F) is the sum of δ(Ḡ, F) over all connected components Ḡ of G, it follows that δ(G) ≡ Σ δ(Ḡ) mod 2, the sum being taken over all connected components Ḡ of G.
Moreover, δ(G) ≡ 1, because the usual Sperner's lemma applied to (any) facet F, which is simplicial, shows that there are an odd number of full facets of T in that facet. Hence there must be some Ḡ such that δ(Ḡ) ≡ 1; in particular, Ḡ meets every facet of P. Because the facets of P are simplicial, Ḡ carries every label, i.e., L(Ḡ) = L(G).
Theorem 19. Any Sperner-labelled triangulation of a simplicial (n, d)-polytope must contain at least (n − d) full cells.
Proof. This follows immediately from Theorem 18 and Theorem 17.
To extend this proof to non-simplicial polytopes requires some new ideas, but the proof follows the same basic pattern: (1) find a function δ that counts the number of times a component Ḡ of G meets a certain facet in a certain way, and show that this function only depends on Ḡ, and (2) appeal to the usual Sperner's lemma for simplices in a lower dimension to constructively show that the parity of δ summed over all components Ḡ must be odd. For the non-simplicial case, we cannot guarantee that any faces of P except those in dimension 1 are simplicial. How to connect dimension 1 to dimension d is tackled in the next section, and the flag graph introduced there gives a constructive procedure for finding certain full cells. Then we construct a counting function δ to show that there are at least (n − d) full cells for an (n, d)-polytope.
4. The Flag Graph and Arbitrary Polytopes.
Throughout this section, let the symbol ≡ denote equivalence mod 2. Recall that L(F) denotes the label set of a face F. Let F denote a flag of the polytope P: a choice of faces F_1 ⊂ F_2 ⊂ ... ⊂ F_d where F_i is an i-face of P. When the choice of F_i is not understood by context, we refer to the i-face of a particular flag F by writing F_i(F).
Given a flag F, it will be extremely useful to construct "super-paths" containing simplices of various dimensions whose endpoints either lie on a 1-dimensional edge or are d-dimensional full cells.
Definition. Let P be an (n, d)-polytope with a Sperner-labelled triangulation T. Let F be a flag of P. We define the flag graph G_F in the following way. For 1 ≤ k ≤ d, a k-simplex σ ∈ T is a node in the graph G_F if and only if σ is one of four types:
(I). the k-simplex σ is carried by the k-face F_k and
(II). the k-simplex σ is carried by the (k + 1)-face F_{k+1} and
(III). the k-simplex σ is carried by the k-face F_k and
(IV). the k-simplex σ is carried by the k-face F_k and there is an I such that
Two nodes are adjacent in G_F if (as simplices) one is a facet of the other and at least one of the pair is of type (I) or (II).
Note that if σ is a type (I) simplex in G_F, then it is a "non-degenerate" full cell of the k-face that it is carried in, i.e., the vertices of P corresponding to the labels in L(σ) span a k-dimensional simplex. A type (II) simplex is a non-degenerate full facet in the (k + 1)-face that it is carried in. A type (III) simplex has just one repeated label and satisfies a certain kind of non-degeneracy (that ensures its two full facets are non-degenerate). A type (IV) simplex is one kind of degenerate full cell in the k-face that it is carried in (but such that it has exactly two facets which are non-degenerate).
Figure 5. A path in the flag-graph of a (7, 3)-polytope. The figure at left shows simplices along a path in the triangulation; simplices carried by F_2 are shaded. The figure at right shows the label sets of the simplices along this path. Simplices σ_1, ..., σ_7 occur in counterclockwise order along this path.
Example. Let P be a (7, 3)-polytope, i.e., a 3-dimensional polytope with 7 vertices, and suppose T is a Sperner-labelled triangulation of P. Let F_1 ⊂ F_2 ⊂ F_3 be a flag F of P with label sets {1, 2} ⊂ {1, 2, 3, 4, 5} ⊂ {1, 2, ..., 7}, respectively. Consider the collection of simplices σ_1, ..., σ_7 shown in Figure 5, with the label sets listed there (σ_2 has the repeated label 2), such that in each pair {σ_i, σ_{i+1}} one is a facet of the other. The simplices shaded in the figure are carried by the face F_2 and the others are carried by the face F_3. Each of these simplices is a node in the graph G_F: σ_2 is of type (III), σ_5 is of type (IV), and the others are of type (I) or (II). Furthermore, each pair
σ_i and σ_{i+1} are adjacent in G_F. Except for σ_1 and σ_7, each of these simplices has exactly 2 neighbors in G_F, so the above sequence traces out a path.
The following result shows that G_F does, in fact, consist of a collection of loops or paths whose endpoints are either 1-dimensional or d-dimensional.
Lemma 20. Every node σ of G_F has degree 1 or 2, and σ has degree 1 only when σ is a 1-simplex or a d-simplex in G_F.
Proof. Consider a k-simplex σ of type (I). If k ≥ 2, then σ has a facet determined by the k labels in L(σ) ∩ L(F_{k−1}), and this facet is a (k − 1)-simplex of type (I) or (II), so it is adjacent to σ. No other facets of σ are of types (I)-(IV). If k ≤ d − 1, then σ is a facet of exactly one node of G_F (a (k + 1)-simplex carried by F_{k+1}) that must be of type (I) or (III) or (IV). Thus a type (I) simplex has degree 2 unless k = 1 or k = d, in which case it has degree 1.
In case (II), the k-simplex σ is the facet of exactly two (k + 1)-simplices in F_{k+1}; these are either of type (I) or (III) or (IV) and are thus neighbors of σ in G_F. Facets of σ are not of types (I)-(IV) because they are of codimension 2 in the face F_{k+1}. Thus type (II) vertices have degree 2.
In case (III), the k-simplex σ has exactly two facets determined by the k labels in L(σ); each of these is a (k − 1)-simplex adjacent to σ in G_F because it is either of type (I) in F_{k−1} or of type (II) in F_k. No other facets of σ are of types (I)-(IV). Thus type (III) vertices have degree 2.
In case (IV), the labelling rules show that the set L(σ) ∩ (L(F_I) ∖ L(F_{I−1})) is of size two. Call these labels a and b. There is exactly one facet of σ that omits the label a and one facet that omits the label b; each of these is a (k − 1)-simplex of type (I) in F_{k−1} or of type (II) in F_k, so is adjacent to σ in G_F. No other facets of σ are non-degenerate; therefore they cannot be of types (I) or (II). Thus type (IV) vertices have degree 2.
We remark that in the definition of the flag graph, we require at least one of an adjacent pair to be of type (I) or (II) because without this restriction, some type (IV) vertices could have degree greater than 2. For instance, in a Sperner-labelled, triangulated (9, 4)-polytope, suppose F_1 ⊂ F_2 ⊂ F_3 ⊂ F_4 is a flag F of P with label sets {1, 2} ⊂ {1, 2, 3, 4, 5} ⊂ {1, 2, ..., 7} ⊂ {1, 2, ..., 9}, respectively. If σ is a 4-simplex in F_4 with label set {1, 2, 3, 4, 6} such that its face τ with labels {1, 2, 3, 4} is carried in F_3, then both σ and τ are of type (IV). They each already have two facets in G_F of type (I) or (II), so we would not want to define them to be adjacent to each other.
Theorem 21. A Sperner-labelled triangulation of an (n, d)-polytope contains, for each edge F_1 of P, a non-degenerate full cell whose labels contain L(F_1).
Proof. For any flag F containing the edge F_1, Lemma 20 shows that G_F consists of loops or paths whose endpoints are non-degenerate full cells in F_1 or F_d; thus the number of such endpoint full cells in F_1 and the number in F_d must be of the same parity. On the other hand, the 1-dimensional Sperner's lemma shows that the number of full cells in F_1 is odd. So the number of non-degenerate full cells in F_d in G_F must be odd. In particular there is at least one non-degenerate full cell in F_d whose label set contains L(F_1).
Notice that the above proof is constructive; the graph G_F yields a method for locating a non-degenerate full cell for any choice of flag F, by starting at one of the full cells on the edge F_1 (an odd number of them are available). At most an even number of them are matched to one another by paths in G_F, so at least one of them is matched by a path to a non-degenerate full cell in F_d.
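The path-following step itself is elementary: in any finite graph whose nodes have degree 1 or 2 (such as G_F, by Lemma 20), starting from a degree-1 node and repeatedly moving to the unique unvisited neighbor must terminate at another degree-1 node. The sketch below, written against a plain adjacency-list representation (the encoding of the triangulation and of the node types is left abstract and is not taken from the paper), illustrates this traversal.

def follow_path(adj, start):
    """Walk from a degree-1 node of a graph whose nodes all have degree 1 or 2.

    adj   : dict mapping each node to the list of its neighbors
    start : a node of degree 1 (e.g., a full cell on the edge F_1)
    Returns the other endpoint of the path containing start.
    """
    assert len(adj[start]) == 1, "start must be an endpoint (degree 1)"
    prev, cur = None, start
    while True:
        nbrs = [v for v in adj[cur] if v != prev]
        if not nbrs:                      # degree-1 node reached: the other endpoint
            return cur
        prev, cur = cur, nbrs[0]

# Toy example: a path a - b - c - d (in G_F the two endpoints would be the
# full cell on the edge F_1 and a non-degenerate full cell in F_d).
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(follow_path(adj, "a"))              # -> "d"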
However, as we show now, more can be said about the location of full cells. Rather than locating all of them at the endpoints of paths in a flag-graph, we can show that there is some component Ḡ of the nerve graph G that contains at least (n − d) full cells. One can trace paths through this component to find them. We find a component Ḡ of G that carries all labels; Theorem 17 will then imply that the component must have at least (n − d) full cells. As in the case for simplicial polytopes, the key rests on defining a function δ that counts the number of times that Ḡ meets a facet in a certain way, and then showing that the parity of δ exhibits a certain kind of invariance: it really only depends on Ḡ. Any component with non-zero parity will be the desired component.
Definition. Suppose F is a facet of P and R is a ridge of P that is a facet of F. Let δ(G, F, R) denote the number of nodes σ of the nerve graph G in the facet F such that |L(σ) ∩ L(R)| = d − 1.
Similarly, if K is any (d − 1)-subset of L(F), let δ(G, F, K) denote the number of nodes σ of the graph G in the facet F such that |L(σ) ∩ K| = d − 1.
If Ḡ is a connected component of G, define δ(Ḡ, F, R) and δ(Ḡ, F, K) similarly, using Ḡ instead of G.
Thus δ(G, F, R) (resp. δ(Ḡ, F, R)) counts non-degenerate full cells of type (I) from G_F (resp. Ḡ_F) in the facet F, for all flags of P whose (d − 1)-face is F and whose (d − 2)-face is R. It is easy to show that the parity of δ(G, F, R) is independent of both F and R:
Theorem 22. Given any flag of P, let F denote its (d − 1)-face and R its (d − 2)-face. Then
δ(G, F, R) ≡ 1.
Proof. Consider the subgraph of G_F consisting of the simplices of dimension (d − 1) or lower. This subgraph must be a collection of loops or paths (since G_F is) whose endpoints (an even number of them) are non-degenerate full cells in either F_1 or F_{d−1}. But Sperner's lemma in dimension 1 (or simple inspection) shows that the number of full cells in F_1 must be odd. Hence the number of non-degenerate full cells of G_F that meet F_{d−1} = F must be odd as well.
The next two lemmas show that for connected components Ḡ of G, the parity of δ(Ḡ, F, R) is also independent of F and R. This fact does not follow directly from Theorem 22, since we do not know that endpoints of G_F for different flags are connected in Ḡ. To establish this we need to trace connected paths in the nerve graph G rather than the flag graph G_F.
Lemma 23. Let Ḡ be a connected component of G. Suppose that R, R' are ridges of P and F a facet of P such that R, R' are both facets of F. Then
δ(Ḡ, F, R) ≡ δ(Ḡ, F, R').
Proof. First assume that R and R' are "adjacent" ridges sharing a common facet C (it has dimension d − 3, if you are keeping track). We claim that
Σ_{A, x} δ(Ḡ, F, A ∪ x) ≡ 0,
where x runs over all labels in L(F) that are not in L(C) and A runs over all (d − 2)-subsets of L(C). (Here we write A ∪ x instead of A ∪ {x} to reduce notation.) The above congruence holds because the sum only counts fully-labelled simplices σ from Ḡ in F that contain a (d − 2)-subset A of L(C), and every such σ appears exactly twice in this sum: it is counted once each in δ(Ḡ, F, A ∪ a) and in δ(Ḡ, F, A ∪ b), where a and b are the two labels of σ outside A.
On the other hand, if K = A ∪ x is not the label set of any ridge of P, then any fully-labelled simplex on the boundary of P that contains K must be contained in the facet F. Since K is of size d − 1, by Lemma 11 we see that δ(Ḡ, F, K) ≡ 0, because there is an even number of endpoints of paths in G_K, and such paths are connected subgraphs of the connected graph Ḡ.
Thus the only terms surviving in the above sum correspond to the label sets of the two ridges R, R' that are facets of F and share a common face C, i.e.,
δ(Ḡ, F, R) + δ(Ḡ, F, R') ≡ 0,
which yields the desired conclusion for neighboring ridges R, R'.
Since any two ridges of a facet F are connected by a chain of adjacent ridges, the general conclusion holds.
Lemma 24. Let Ḡ be a connected component of G. Let F, F' be adjacent facets of the polytope P bordering on a common ridge R. Then
δ(Ḡ, F, R) ≡ δ(Ḡ, F', R).
Proof. Let R denote the ridge common to both F and F'. Let K be any non-degenerate subset of L(R) of size (d − 1), i.e., K is not contained in the label set of any face of P of dimension less than d − 2. Consider the derived graph G_K. By Lemma 11, this graph consists of paths connecting full cells from Ḡ on the boundary of P that contain the label set K. Since these paths are connected subgraphs of Ḡ, there is an even number of endpoints of these paths in Ḡ.
On the other hand, because of the Sperner labelling, all such endpoints must lie in facets of P that contain R. There are exactly two such facets, F and F'. Hence
δ(Ḡ, F, K) + δ(Ḡ, F', K) ≡ 0,
which produces the desired conclusion.
Theorem 25. Let Ḡ be a connected component of G. The parity of δ(Ḡ, F, R) is independent of F and R.
Proof. Since all facet-ridge pairs (F, R) are connected by a sequence of adjacent facets and ridges, the statement follows from Lemmas 23 and 24.
Hence we may define the parity of Ḡ to be δ(Ḡ) ≡ δ(Ḡ, F, R) for any facet-ridge pair (F, R). Similarly, define the parity of G to be δ(G) ≡ δ(G, F, R) for any facet-ridge pair (F, R), which is well-defined and equal to 1 in light of Theorem 22. Now we may prove
Theorem 26. If P is an (n, d)-polytope, there is some component Ḡ of G which carries all labels of P.
Proof. Fix some flag of P, and let F and R denote its (d − 1)-face and (d − 2)-face, respectively. Since δ(G, F, R) is the sum of δ(Ḡ, F, R) over all connected components Ḡ of G, it follows that δ(G) ≡ Σ δ(Ḡ) mod 2, the sum being taken over all connected components Ḡ of G. Moreover, Theorem 22 shows that δ(G) ≡ 1. Hence there must be some Ḡ such that δ(Ḡ) ≡ 1; in particular, Ḡ carries the labels in L(R). Since the flag was arbitrary, Ḡ must carry all labels of P.
This concludes our alternate "path-following" proof of Theorem 1, because the (n − d) count follows immediately from Theorems 26 and 17, while the covering property follows (as before) from Proposition 4.
5. Conclusion
As applications of Theorem 1, we prove the corollaries mentioned in the introduction, and make some further remarks.
Corollary 2 follows directly from the degree and covering arguments in the non-constructive proof of Section 2 and the fact that stacked polytopes have triangulations of size (n − d) [11].
Corollary 3, proved below, is stronger than the version stated in [10], which does not mention the covering property of the full cells, nor their cardinality. That weaker version also follows from the combinatorial results of Freund in [6] (in particular his Theorem 4). In fact we should note the similar flavor of Freund's results to our theorem, although he considers triangulations of polytopes with the number of labels equal to the number of facets (not vertices). His results seem to imply a "dual" version of our polytopal Sperner lemma, but without an estimate of how many full cells exist.
Proof of Corollary 3. Let C_j, 1 ≤ j ≤ n, be the closed sets in the statement. Consider an infinite sequence of triangulations T_k of the polytope P with the property that the maximal diameter of their simplices tends to zero as k goes to infinity. For each triangulation, we label a vertex y of T_k with a label j ∈ {1, ..., n} such that y ∈ C_j. This is clearly a Sperner labelling.
By Theorem 1, each triangulation T_k targets a collection of simplices of P corresponding to full cells in T_k. There are only finitely many possible collections (since they are subsets of the set of all simplices of P), and because there are infinitely many T_k, some collection C of simplices must be targeted infinitely many times by a subsequence T_{k_i} of T_k. By Theorem 1, this collection C is a cover of P and therefore has at least c(P) elements.
For each simplex σ in C, choose one full cell σ_i in T_{k_i} that shares the same label set. The σ_i form a sequence of simplices decreasing in size. By the compactness of P, a subsequence of these simplices converges to a point, which (by the labelling rule) must be in the intersection of the closed sets C_j with j ∈ L(σ).
Thus given a point p ∈ P, choose any simplex σ of C that contains p (since C is a cover of P), and let J_p = L(σ). Then the above remarks show that J_p satisfies the conditions in the conclusion of the theorem. Moreover, there are at least c(P) different such subsets, one for each σ in C.
We remark that in the statement of Theorem 1, the full cells not only correspond
to a cover of the polytope P but, in fact, to a face-to-face cover. While this is not
apparent from the first proof of Theorem 1, it may be seen from the second proof; in particular, the adjacencies in the full cell graph G' indicate how the simplices of the cover meet face-to-face. Thus the subsets in Corollary 1 also satisfy this property.
We close with a couple of questions. For a specific polytope P, define the pebble number p(P) to be the size of its largest pebble set. The (n − d) lower bound of Theorem 1 is tight, achieved by stacked polytopes whose vertices are assigned different labels. But for a specific polytope P, the arguments of Section 2 show that the lower bound (n − d) can be improved to p(P). What can be said about the value of p(P)?
We can provide at least two upper bounds for this number. On one hand, p(P) ≤ c(P), because for a maximal pebble set, at most one pebble lies in each simplex of a minimal size cover. On the other hand, consider the simplex-chamber incidence matrix M introduced in [1]. As the columns correspond to chambers, a pebble selection is essentially a selection of a "row-echelon" submatrix; therefore the rank of M is an upper bound on the size of pebble sets. Our pebble construction gives an algorithm for selecting an explicit independent set of columns of M (although this may not always be a basis).
A related question is: for a specific polytope P, how can one determine the minimal cover size c(P)? Although Corollary 2 gives a general sharp lower bound for all polytopes, we know that sometimes minimal covers are much larger for specific polytopes, such as for cubes (as the volume arguments in [15] show). Also note that the minimal cover may be strictly smaller than the minimal triangulation (an example is contained in [3]).
Finding other explicit constructions of pebble sets (besides our "facet-pivoting" construction of Section 2) that work for specific polytopes may shed some light on these questions.
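For a specific polytope P, the discussion above can be summarized in one chain of inequalities, where the left inequality records that p(P) improves the (n − d) bound and M is the simplex-chamber incidence matrix of [1]:
\[
n - d \;\le\; p(P) \;\le\; \min\{\, c(P),\ \operatorname{rank} M \,\}.
\]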
--R
Combinatorial bases in systems of simplices and chambers
Hungarica
Minimal simplicial dissections and triangulations of convex 3-polytopes
On the Sperner lemma
Combinatorial Analogs of Brouwer's Fixed-Point Theorem on a Bounded Polyhedron
Ein Beweis des Fixpunktsatzes f
Simplicial Approximation of Fixed Points
Intersection theorems on polytopes
On triangulations of the convex hull of n points
The Approximation of Fixed Points of a Continuous Mapping
Seifert and Threlfall: A Textbook of Topology
A lower bound for the simplexity of the n-cube via hyperbolic volumes
Neuer Beweis f
Sperner's lemma in fair division
--TR
Combinatorial analogs of Brouwer''s fixed-point theorem on a bounded polyhedron
Duality and minors of secondary polyhedra
Combinatorial bases in systems of simplices and chambers
A lower bound for the simplexity of the <italic>n</italic>-cube via hyperbolic volumes
Extremal Properties for Dissections of Convex 3-Polytopes
--CTR
Frdric Meunier, Sperner labellings: a combinatorial approach, Journal of Combinatorial Theory Series A, v.113 n.7, p.1462-1475, October 2006
Timothy Prescott , Francis Edward Su, A constructive proof of Ky Fan's generalization of Tucker's lemma, Journal of Combinatorial Theory Series A, v.111 n.2, p.257-265, August 2005 | sperner's lemma;path-following;polytopes;simplicial algorithms |
636696 | An Efficient Protocol for Authenticated Key Agreement. | This paper proposes an efficient two-pass protocol for authenticated key agreement in the asymmetric (public-key) setting. The protocol is based on Diffie-Hellman key agreement and can be modified to work in an arbitrary finite group and, in particular, elliptic curve groups. Two modifications of this protocol are also presented: a one-pass authenticated key agreement protocol suitable for environments where only one entity is on-line, and a three-pass protocol in which key confirmation is additionally provided. Variants of these protocols have been standardized in IEEE P1363 [17], ANSI X9.42 [2], ANSI X9.63 [4] and ISO 15496-3 [18], and are currently under consideration for standardization and by the U.S. government's National Institute for Standards and Technology [30]. | Introduction
Key establishment is the process by which two (or more) entities establish a shared secret key.
The key is subsequently used to achieve some cryptographic goal such as confidentiality or data
integrity. Broadly speaking, there are two kinds of key establishment protocols: key transport
protocols in which a key is created by one entity and securely transmitted to the second entity,
and key agreement protocols in which both parties contribute information which jointly establish
the shared secret key. In this paper, we shall only consider key agreement protocols for the
asymmetric (public-key) two-entity setting.
Let A and B be two honest entities, i.e., legitimate entities who execute the steps of a
protocol correctly. Informally speaking, a key agreement protocol is said to provide implicit key
authentication (of B to A) if entity A is assured that no other entity aside from a specifically
identified second entity B can possibly learn the value of a particular secret key. Note that the
property of implicit key authentication does not necessarily mean that A is assured of B actually
possessing the key. A key agreement protocol which provides implicit key authentication to both
participating entities is called an authenticated key agreement (AK) protocol.
Informally speaking, a key agreement protocol is said to provide key confirmation (of B to
is assured that the second entity B actually has possession of a particular secret
key. If both implicit key authentication and key confirmation (of B to are provided, then
the key establishment protocol is said to provide explicit key authentication (of B to A). A key
agreement protocol which provides explicit key authentication to both participating entities is
called an authenticated key agreement with key confirmation (AKC) protocol. For an extensive
survey on key establishment, see Chapter 12 of Menezes, van Oorschot and Vanstone [27].
Extreme care must be exercised when separating key confirmation from implicit key authen-
tication. If an AK protocol which does not offer key confirmation is used, then, as pointed out
in [10], it is desirable that the agreed key be confirmed prior to cryptographic use. This can be
done in a variety of ways. For example, if the key is to be subsequently used to achieve confiden-
tiality, then encryption with the key can begin on some (carefully chosen) known data. Other
systems may provide key confirmation during a 'real-time' telephone conversation. Separating
key confirmation from implicit key authentication is sometimes desirable because it permits flexibility
in how a particular implementation chooses to achieve key confirmation, and thus moves
the burden of key confirmation from the establishment mechanism to the application.
In this paper, we propose a new and efficient two-pass AK protocol. The protocol is based
on Diffie-Hellman key agreement [14], and has many of the desirable security and performance
attributes discussed in [10] (see §2). Two modifications of this protocol are also presented: a
one-pass AK protocol suitable for environments where only one entity is on-line, and a three-pass
protocol in which key confirmation is additionally provided.
The protocols described in this paper establish a shared secret K between two entities. A
derivation function should then be used to derive a secret key from the shared secret. This
is necessary because K may have some weak bits - bits of information about K that can be
predicted correctly with non-negligible advantage. One way to derive a secret key from K is to
apply a one-way hash function such as SHA-1 [28] to K. With the exception of Protocol 3 in
§6.2, this paper does not include key derivation functions in protocol descriptions.
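As an illustration of the kind of key derivation function meant here, the following sketch hashes the shared secret with SHA-1 and truncates the digest to the desired key length; the byte encoding of K and the output length are assumptions made for the example, not prescriptions from the paper.

import hashlib

def derive_key(shared_secret_bytes: bytes, key_len: int = 16) -> bytes:
    """Derive a key_len-byte secret key from the shared secret K.

    shared_secret_bytes: some fixed encoding of K (e.g., the x-coordinate of the
    elliptic curve point as a big-endian byte string).
    """
    digest = hashlib.sha1(shared_secret_bytes).digest()   # 20-byte SHA-1 output
    return digest[:key_len]

# Example usage with a placeholder encoding of K.
k = derive_key((1234567890).to_bytes(20, "big"))
print(k.hex())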
All protocols described in this paper have been described in the setting of the group of points
on an elliptic curve defined over a finite field. However, they can all be easily modified to work
in any finite group in which the discrete logarithm problem appears intractable. Suitable choices
include the multiplicative group of a finite field, subgroups of Z_n^* where n is a composite integer, and subgroups of Z_p^* of prime order q. Elliptic curve groups are advantageous because they offer
p of prime order q. Elliptic curve groups are advantageous because they offer
equivalent security as the other groups but with smaller key sizes and faster computation times.
The remainder of the paper is organized as follows. §2 discusses the desirable attributes of AK and AKC protocols. §3 describes the elliptic curve parameters that are common to both entities involved in the protocols, public keys, and methods for validating them. §4 reviews the MTI protocols, and describes some active attacks on them. These attacks influenced the design of the new two-pass AK protocol, which is presented in §5. §6 presents the one-pass variant of the protocol, and the three-pass variant which provides explicit key authentication. Finally, §7
makes concluding remarks.
2 Desirable attributes of AK and AKC protocols
Numerous Diffie-Hellman-based AK and AKC protocols have been proposed over the years; how-
ever, many have subsequently been found to have security flaws. The main problems were that
appropriate threat models and the goals of secure AK and AKC protocols lacked formal defini-
tion. Blake-Wilson, Johnson and Menezes [10], adapting earlier work of Bellare and Rogaway [9]
in the symmetric setting, provided a formal model of distributed computing and rigorous definitions
of the goals of secure AK and AKC protocols within this model. Concrete AK and AKC
protocols were proposed, and proven secure within this framework in the random oracle model
[8]. This paper follows their definitions of goals of secure AK and AKC protocols (described
informally in §1) and their classification of threats.
A secure protocol should be able to withstand both passive attacks (where an adversary
attempts to prevent a protocol from achieving its goals by merely observing honest entities
carrying out the protocol) and active attacks (where an adversary additionally subverts the
communications by injecting, deleting, altering or replaying messages). In addition to implicit
authentication and key confirmation, a number of desirable security attributes of AK and
AKC protocols have been identified (see [10] for a further discussion of these):
1. known-key security. Each run of a key agreement protocol between two entities A and B
should produce a unique secret key; such keys are called session keys. A protocol should
still achieve its goal in the face of an adversary who has learned some other session keys.
2. (perfect) forward secrecy. If long-term private keys of one or more entities are compromised,
the secrecy of previous session keys established by honest entities is not affected.
3. key-compromise impersonation. Suppose A's long-term private key is disclosed. Clearly
an adversary that knows this value can now impersonate A, since it is precisely this value
that identifies A. However, it may be desirable that this loss does not enable an adversary
to impersonate other entities to A.
4. unknown key-share. Entity A cannot be coerced into sharing a key with entity B without
A's knowledge, i.e., when A believes the key is shared with some entity C ≠ B, and B
(correctly) believes the key is shared with A.
5. key control. Neither entity should be able to force the session key to a preselected value.
Desirable performance attributes of AK and AKC protocols include a minimal number of passes (the number of messages exchanged in a run of the protocol), low communication overhead
(total number of bits transmitted), and low computation overhead. Other attributes that may
be desirable in some circumstances include role-symmetry (the messages transmitted between
entities have the same structure), non-interactiveness (the messages transmitted between the
two entities are independent of each other), and the non-reliance on encryption (to meet export
requirements), hash functions (since these are notoriously hard to design), and timestamping
(since it is difficult to implement securely in practice).
3 Domain parameters and key pair generation
This section describes the elliptic curve parameters that are common to both entities involved
in the protocols (i.e., the domain parameters), and the key pairs of each entity.
3.1 Domain parameters
The domain parameters for the protocols described in this paper consist of a suitably chosen elliptic curve E defined over a finite field F_q of characteristic p, and a base point P ∈ E(F_q). In the remainder of this subsection, we elaborate on what "suitable" parameters are, and outline
a procedure for verifying that a given set of parameters meet these requirements.
In order to avoid the Pollard-rho [32] and Pohlig-Hellman [31] algorithms for the elliptic curve discrete logarithm problem (ECDLP), it is necessary that the number of F_q-rational points on E, denoted #E(F_q), be divisible by a sufficiently large prime n. As of this writing, it is commonly recommended that n > 2^160 (see [3, 4]). Having fixed an underlying field F_q, n should be selected to be as large as possible, i.e., one should have n ≈ q, so that #E(F_q) is almost prime. In the remainder of this paper, we shall assume that n > 2^160 and that n > 4√q. By Hasse's Theorem,
q + 1 − 2√q ≤ #E(F_q) ≤ q + 1 + 2√q.
Hence n > 4√q implies that n^2 does not divide #E(F_q), and thus E(F_q) has a unique subgroup of order n. Also, since n divides #E(F_q) and n > 4√q, there is a unique integer h such that #E(F_q) = hn, namely h = ⌊(√q + 1)^2/n⌋. To guard against potential small subgroup attacks (see §4.1), the point P should have order n.
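The uniqueness of the cofactor h follows directly from the width of the Hasse interval; the short derivation below only restates this observation:
\[
\#E(\mathbb{F}_q) \in [\,q+1-2\sqrt{q},\; q+1+2\sqrt{q}\,],
\]
an interval of length \(4\sqrt{q} < n\), so it contains at most one multiple of \(n\); since \(n \mid \#E(\mathbb{F}_q)\), that multiple is \(\#E(\mathbb{F}_q) = hn\) with \(h = \lfloor (\sqrt{q}+1)^2/n \rfloor\).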
Some further precautions should be exercised when selecting the elliptic curve. To avoid the reduction algorithms of Menezes, Okamoto and Vanstone [24] and Frey and Rück [16], the curve should be non-supersingular (i.e., p should not divide (q + 1 − #E(F_q))). More generally, one should verify that n does not divide q^i − 1 for all 1 ≤ i ≤ C, where C is large enough so that it is computationally infeasible to find discrete logarithms in F_{q^C} (C = 20 suffices in practice [3]). To avoid the attack of Semaev [34], Smart [35], and Satoh and Araki [33] on F_q-anomalous curves, the curve should not be F_q-anomalous (i.e., #E(F_q) ≠ q).
A prudent way to guard against these attacks, and similar attacks against special classes of
curves that may be discovered in the future, is to select the elliptic curve E at random subject
to the condition that #E(F q ) is divisible by a large prime - the probability that a random
curve succumbs to these special-purpose attacks is negligible. A curve can be selected verifiably
at random by choosing the coefficients of the defining elliptic curve equation as the outputs of
a one-way function such as SHA-1 according to some pre-specified procedure. A procedure for
accomplishing this, similar in spirit to the method given in FIPS 186 [29] for selecting DSA
primes verifiably at random, is described in ANSI X9.62 [3].
To summarize, domain parameters are comprised of:
1. a field size q, where q is a prime power (in practice, either an odd prime or a power of 2);
2. an indication FR (field representation) of the representation used for the elements of F_q;
3. two field elements a and b in F_q which define the equation of the elliptic curve E over F_q (i.e., y^2 = x^3 + ax + b in the case p > 3, and y^2 + xy = x^3 + ax^2 + b in the case p = 2);
4. two field elements x_P and y_P in F_q which define a finite point P = (x_P, y_P) of prime order n in E(F_q) (since P is described by two field elements, this implies that P ≠ O, where O denotes the point at infinity);
5. the order n of the point P; and
6. the cofactor h = #E(F_q)/n.
Domain parameter validation. A set of domain parameters (q, FR, a, b, P, n, h) can be verified to meet the above requirements as follows. This process is called domain parameter validation.
1. Verify that q is a prime power.
2. Verify that FR is a valid field representation.
3. Verify that a, b, x_P and y_P are elements of F_q (i.e., verify that they are of the proper format for elements of F_q).
4. Verify that a and b define a (non-singular) elliptic curve over F_q (i.e., 4a^3 + 27b^2 ≠ 0 in the case p > 3, and b ≠ 0 in the case p = 2).
5. Verify that P satisfies the defining equation of E (and that P ≠ O).
6. Verify that n > 4√q, that n is prime, and that n is sufficiently large (e.g., n > 2^160).
7. Verify that nP = O.
8. Compute h' = ⌊(√q + 1)^2/n⌋ and verify that h = h'.
9. To ensure protection against known attacks on special classes of elliptic curves, verify that n does not divide q^i − 1 for each 1 ≤ i ≤ 20, and that n ≠ q.
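Two of the purely arithmetic conditions above (the size condition in check 6 and the divisibility conditions in check 9) are easy to script; the sketch below does only that much, leaving primality testing and the curve-dependent checks aside, and the example call uses numbers chosen solely to exercise the arithmetic rather than a consistent parameter set.

def check_orders(q, n, C=20):
    """Partial domain parameter validation: n > 4*sqrt(q) (from check 6),
    n != q, and n does not divide q^i - 1 for 1 <= i <= C (check 9).
    Primality of n and the checks involving E and P are omitted here."""
    if n * n <= 16 * q or n == q:
        return False
    return all(pow(q, i, n) != 1 for i in range(1, C + 1))

print(check_orders(q=23, n=1019))   # True for these illustrative numbers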
3.2 Key pair generation
Given a valid set of domain parameters (q, FR, a, b, P, n, h), an entity A's private key is an integer d selected at random from the interval [1, n − 1]. A's public key is the elliptic curve point Q = dP. The key pair is (Q, d). Each run of a key agreement protocol between two entities A and B should produce a unique shared secret key. Hence, for the protocols described in this paper, each entity has two public keys: a static or long-term public key, and an ephemeral or short-term public key. The static public key is bound to the entity for a certain period of time, typically through the use of certificates. A new ephemeral public key is generated for each run of the protocol. A's static key pair is denoted (WA, wA), while A's ephemeral key pair is denoted (RA, rA).
For the remainder of this paper, we will assume that static public keys are exchanged via
certificates. Cert A denotes A's public-key certificate, containing a string of information that
uniquely identifies A (such as A's name and address), her static public key WA , and a certifying
authority CA's signature over this information. To avoid a potential unknown key-share attack
(see x4.2), the CA should verify that A possesses the private key wA corresponding to her
static public key WA . Other information may be included in the data portion of the certificate,
including the domain parameters if these are not known from context. Any other entity B can
use his authentic copy of the CA's public key to verify A's certificate, thereby obtaining an
authentic copy of A's static public key.
Public-key validation. Before using an entity's purported public key Q, it is prudent to
verify that it possesses the arithmetic properties it is supposed to - namely that Q be a finite
point in the subgroup generated by P. This process is called public-key validation [19]. Given a valid set of domain parameters (q, FR, a, b, P, n, h), a purported public key Q = (xQ, yQ) can be validated by verifying that:
1. Q is not equal to O;
2. xQ and yQ are elements in the field F_q (i.e., they are of the proper format for elements of F_q);
3. Q satisfies the defining equation of E; and
4. nQ = O.
Embedded public-key validation. The computationally expensive operation in public-key
validation is the scalar multiplication in step 4. For static public keys, the validation could be
done once and for all by the CA. However, since a new ephemeral key is generated for each
run of the protocol, validation of the ephemeral key places a significant burden on the entity
performing the validation. To reduce this burden, step 4 can be omitted during key validation.
Instead, the protocols proposed in §5 and §6 ensure that the shared secret K generated is a finite point in the subgroup generated by P. To summarize, given a valid set of domain parameters (q, FR, a, b, P, n, h), embedded public-key validation of a purported public key Q = (xQ, yQ) is accomplished by verifying that:
1. Q is not equal to O;
2. xQ and y Q are elements in the field F q (i.e., they are of the proper format for elements of
F q ); and
3. Q satisfies the defining equation of E.
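For a curve in short Weierstrass form over a prime field, embedded public-key validation amounts to three inexpensive tests. The sketch below assumes that the point at infinity is represented by None and that a public key is a pair of integers; both representation choices and the toy curve are illustrative assumptions, not values taken from the paper.

def embedded_validate(Q, q, a, b):
    """Embedded public-key validation for E: y^2 = x^3 + ax + b over F_q (q an odd prime)."""
    if Q is None:                             # step 1: Q must not be the point at infinity O
        return False
    x, y = Q
    if not (0 <= x < q and 0 <= y < q):       # step 2: coordinates must be field elements
        return False
    return (y * y - (x ** 3 + a * x + b)) % q == 0   # step 3: Q satisfies the curve equation

# Toy example over F_23 with the curve y^2 = x^3 + x + 1.
print(embedded_validate((3, 10), q=23, a=1, b=1))   # True: 10^2 = 8 = 3^3 + 3 + 1 (mod 23)
print(embedded_validate((3, 11), q=23, a=1, b=1))   # False: not on the curve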
4 The MTI key agreement protocols
The MTI/A0 and MTI/C0 key agreement protocols described here are special cases of the
three infinite families of key agreement protocols proposed by Matsumoto, Takashima and Imai
[23] in 1986. They were designed to provide implicit key authentication, and do not provide key
confirmation. Closely related protocols are KEA [30] and those proposed by Goss [17] and Yacobi
[37], the latter operating in the ring of integers modulo a composite integer. Yacobi proved that
his protocol is secure against certain types of known-key attacks by a passive adversary (provided
that the composite modulus Diffie-Hellman problem is intractable). However, Desmedt and
Burmester [13] pointed out that the security is only heuristic under known-key attack by an
active adversary.
This section illustrates that the MTI/A0 and MTI/C0 families of protocols are vulnerable to
several active attacks. The small subgroup attack presented in x4.1 illustrates that the MTI/C0
protocol (as originally described) does not provide implicit key authentication. The unknown
attack presented in x4.2 illustrates that the MTI/A0 protocol does not possess the
unknown key-share attribute in some circumstances. Other active attacks on AK protocols are
discussed by Diffie, van Oorschot and Wiener [15], Burmester [11], Just and Vaudenay [20], and
Lim and Lee [22].
4.1 Small subgroup attack
The small subgroup attack was first pointed out by Vanstone [26]; see also van Oorschot and
Wiener [36], Anderson and Vaudenay [1], and Lim and Lee [22]. The attack illustrates that authenticating
and validating the static and ephemeral keys is a prudent, and sometimes essential,
measure to take in Diffie-Hellman AK protocols. We illustrate the small subgroup attack on the
MTI/C0 protocol. The protocol assumes that A and B a priori possess authentic copies of each
other's static public keys.
MTI/C0 AK protocol.
1. A generates a random integer rA, 1 ≤ rA ≤ n − 1, computes the point TA = rA·WB, and sends TA to B.
2. B generates a random integer rB, 1 ≤ rB ≤ n − 1, computes the point TB = rB·WA, and sends TB to A.
3. A computes K = wA^{-1}·rA·TB.
4. B computes K = wB^{-1}·rB·TA.
5. The shared secret is the point K.
The small subgroup attack can be launched if the order n of the base point P is not prime; say, n = mt where t is small. The attack forces the shared secret to be one of a small and known subset of points.
A small subgroup attack on the MTI/C0 protocol.
1. E intercepts A's message TA and replaces it with mTA.
2. E intercepts B's message TB and replaces it with mTB.
3. A computes K = wA^{-1}·rA·(mTB) = m·rA·rB·P.
4. B computes K = wB^{-1}·rB·(mTA) = m·rA·rB·P.
K lies in the subgroup of order t of the group generated by P, and hence it takes on one of only t possible values. Since the value of K can be correctly guessed by E with high probability, this
shows that the MTI/C0 protocol does not provide implicit key authentication. The effects of
this can be especially drastic because a subsequent key confirmation phase may fail to detect
this attack - E may be able to correctly compute the messages required for key confirmation.
For example, E may be able to compute on behalf of A the message authentication code (MAC)
under K of the message consisting of the identities of A and B, the ephemeral point mTA purportedly sent by A, and the point TB sent by B.
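The effect of the attack can be seen numerically in any group; the paper's elliptic curve notation is additive, but the same phenomenon occurs in a multiplicative group, which is used below purely for ease of experimentation (the modulus, generator and factorization are illustrative choices, not parameters from the paper).

import random

p = 2039                  # small prime, for illustration only
g = 7                     # generates a group of order n = p - 1 = 2 * 1019
n = p - 1
t = 2                     # small factor of n
m = n // t

# Whatever element the honest parties would have exchanged ...
X = pow(g, random.randrange(1, n), p)
# ... after the adversary's modification (the analogue of replacing T_A by m*T_A),
# the derived value lies in the subgroup of order t:
forced = pow(X, m, p)
print(forced in {1, p - 1})   # always True: only t = 2 possible values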
The small subgroup attack in MTI/C0 can be prevented, for example, by mandating use of
a base point P of prime order, and requiring that both A and B perform key validation on the
ephemeral public keys they receive.
1 In this case, static private keys w should be selected subject to the condition
4.2 Unknown key-share attack
The active attack on the MTI/A0 protocol described in this subsection was first pointed out
by Menezes, Qu and Vanstone [25]. The attack, and the many variants of it, illustrate that
it is prudent to require an entity A to prove possession of the private key corresponding to
its (static) public key to a CA before the CA certifies the public key as belonging to A. This
proof of possession can be accomplished by zero-knowledge techniques; for example, see Chaum,
Evertse and van de Graaf [12].
MTI/A0 AK protocol.
1. A generates a random integer rA, 1 ≤ rA ≤ n − 1, computes the point RA = rA·P, and sends (RA, Cert A) to B.
2. B generates a random integer rB, 1 ≤ rB ≤ n − 1, computes the point RB = rB·P, and sends (RB, Cert B) to A.
3. A computes K = rA·WB + wA·RB.
4. B computes K = rB·WA + wB·RA.
5. The shared secret is the point K.
In one variant of the unknown key-share attack, the adversary E wishes to have messages
sent from A to B identified as having originated from E herself. To accomplish this, E selects an
integer e, 1 ≤ e ≤ n − 1, computes WE = e·WA, and gets this certified as her public key. Notice that E does not know the logical private key wE = e·wA which corresponds to her public key WE (assuming, of course, that the discrete logarithm problem in E(F_q) is intractable), although she knows e.
An unknown key-share attack on the MTI/A0 protocol.
1. E intercepts A's message (RA ; Cert A ) and replaces it with (RA ; Cert E ).
2. B sends (RB ; Cert B ) to E, who then forwards (eRB ; Cert B ) to A.
3. A computes K = rA·WB + wA·(eRB).
4. B computes K = rB·WE + wB·RA.
A and B now share the secret K even though B believes he shares the secret with E. Note that
does not learn the value of K. Hence, the attack illustrates that the MTI/A0 protocol does
not possess the unknown key-share attribute.
A hypothetical scenario where the attack may be launched successfully is the following; this
scenario was first described by Diffie, van Oorschot and Wiener [15]. Suppose that B is a bank
branch and A is an account holder. Certificates are issued by the bank headquarters and within
each certificate is the account information of the holder. Suppose that the protocol for electronic
deposit of funds is to exchange a key with a bank branch via a mutually authenticated key
agreement. Once B has authenticated the transmitting entity, encrypted funds are deposited
to the account number in the certificate. Suppose that no further authentication is done in
the encrypted deposit message (which might be the case to save bandwidth). If the attack
mentioned above is successfully launched then the deposit will be made to E's account instead
of A's account.
5 The new authenticated key agreement protocol
In this section, the two-pass AK protocol is described. The following notation is used in §5 and §6. f denotes the bitlength of n, the prime order of the base point P.
If Q is a finite elliptic curve point, then Q̄ is defined as follows. Let x be the x-coordinate of Q, and let x̄ be the integer obtained from the binary representation of x. (The value of x̄ will depend on the representation chosen for the elements of the field F_q.) Then Q̄ is defined to be the integer (x̄ mod 2^⌈f/2⌉) + 2^⌈f/2⌉. Observe that (Q̄ mod n) ≠ 0.
5.1 Protocol description
We now describe the two-pass AK protocol (Protocol 1) which is an optimization and refinement
of a protocol first described by Menezes, Qu and Vanstone [25]. It is depicted in Figure 1. In
this and subsequent Figures, A (w A;WA ) denotes that A's static private key and static public key
are wA and WA , respectively. Domain parameters and static keys are set up and validated as
described in §3. If A and B do not a priori possess authentic copies of each other's static public keys, then certificates should be included in the flows.
Figure 1: Two-pass AK protocol (Protocol 1)
Protocol 1.
1. A generates a random integer rA, 1 ≤ rA ≤ n − 1, computes the point RA = rA·P, and sends this to B.
2. B generates a random integer rB, 1 ≤ rB ≤ n − 1, computes the point RB = rB·P, and sends this to A.
3. A does an embedded key validation of RB (see §3.2). If the validation fails, then A terminates the protocol run with failure. Otherwise, A computes
sA = (rA + R̄A·wA) mod n,   (1)
and
K = h·sA·(RB + R̄B·WB).   (2)
If K = O, then A terminates the protocol run with failure.
4. B does an embedded key validation of RA. If the validation fails, then B terminates the protocol run with failure. Otherwise, B computes
sB = (rB + R̄B·wB) mod n,   (3)
and
K = h·sB·(RA + R̄A·WA).   (4)
If K = O, then B terminates the protocol run with failure.
5. The shared secret is the point K.
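Since the paper notes that the protocol can be carried out in any finite group in which the discrete logarithm problem is hard, the easiest way to see steps (1)-(4) in action is in a prime-order subgroup of Z_p^* with toy parameters; the sketch below does exactly that (the group, the parameter sizes, and the use of the integer representing a group element in place of an x-coordinate inside the truncation function are illustrative assumptions, not part of the protocol specification).

import random

p, n, h, g = 2039, 1019, 2, 4      # toy group: g generates the subgroup of prime order n in Z_p^*
f = n.bit_length()

def avf(R):
    """Truncation R -> R_bar used in sA, sB (half the bits of R, with a high bit forced)."""
    half = 2 ** ((f + 1) // 2)      # 2^(ceil(f/2))
    return (R % half) + half

def keypair():
    x = random.randrange(1, n)
    return x, pow(g, x, p)

wA, WA = keypair()                  # A's static key pair
wB, WB = keypair()                  # B's static key pair
rA, RA = keypair()                  # A's ephemeral key pair
rB, RB = keypair()                  # B's ephemeral key pair

sA = (rA + avf(RA) * wA) % n                          # equation (1)
KA = pow(RB * pow(WB, avf(RB), p) % p, h * sA, p)     # equation (2)

sB = (rB + avf(RB) * wB) % n                          # equation (3)
KB = pow(RA * pow(WA, avf(RA), p) % p, h * sB, p)     # equation (4)

assert KA == KB                     # both sides obtain K = g^(h*sA*sB)
print("shared secret established" if KA != 1 else "run fails: K is the identity (cf. steps 3-4)")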
5.2 Security notes and rationale
Although the security of Protocol 1 has not been formally proven in a model of distributed
computing, heuristic arguments suggest that Protocol 1 provides mutual implicit key authenti-
cation. In addition, Protocol 1 appears to have the security attributes of known-key security,
forward secrecy, key-compromise impersonation, and key control that were listed in x2. Another
security attribute of Protocol 1 is that compromise of the ephemeral private keys (r A and r B )
reveals neither the static private keys (wA and wB ) nor the shared secret K.
Kaliski [21] has recently observed that Protocol 1 does not possess the unknown key-share
attribute. This is demonstrated by the following on-line attack. The adversary E intercepts
A's ephemeral public key RA intended for B, and computes RE = RA + R̄A·WA − u·P for a randomly selected integer u, together with wE = u·(R̄E)^{-1} mod n and WE = wE·P (so that RE + R̄E·WE = RA + R̄A·WA). E gets WE certified as her static public key (note that E
knows the corresponding private key wE ), and transmits RE to B. B responds by sending RB
to E, which E forwards to A. Both A and B compute the same secret K, however B mistakenly
believes that he shares K with E. We emphasize that lack of the unknown key-share attribute
does not contradict the fundamental goal of mutual implicit key authentication - by definition
the provision of implicit key authentication is only considered in the case where B engages in the
protocol with an honest entity (which E isn't). If an application using Protocol 1 is concerned
with the lack of the unknown key-share attribute under such on-line attacks, then appropriate key confirmation should be added, for example as specified in Protocol 3.
Protocol 1 can be viewed as a direct extension of the ordinary (unauthenticated) Diffie-Hellman key agreement protocol. The quantities sA and sB serve as implicit signatures for A's ephemeral public key RA and B's ephemeral public key RB, respectively. The shared secret is
K = h·sA·sB·P,   (5)
rather than rA·rB·P as would be the case with ordinary Diffie-Hellman.
The expression for R̄A uses only half the bits of the x-coordinate of RA. This was done in order to increase the efficiency of computing K because the scalar multiplication R̄A·WA in (4) can be done in half the time of a full scalar multiplication. The modification does not appear to affect the security of the protocol. The definition of R̄A implies that R̄A ≠ 0; this ensures that the contribution of the static private key wA is not being cancelled in the formation of sA in (1).
Multiplication by h in (2) and (4) ensures that the shared secret K (see equation (5)) is a point in the subgroup of order n in E(F_q). The check K ≠ O ensures that K is a finite point.
If Protocol 1 is used to agree upon a k-bit key for subsequent use in a symmetric-key block cipher, then it is recommended that the elliptic curve be chosen so that n > 2^{2k}.
Protocol 1 has all the desirable performance attributes listed in §2. From A's point of view, the dominant computational steps in a run of Protocol 1 are the scalar multiplications rA·P, R̄B·WB, and h·sA·(RB + R̄B·WB). Hence the work required by each entity is 2.5 (full) scalar multiplications. Since rA·P can be computed off-line by A, the on-line work required by each entity is only 1.5 scalar multiplications. In addition, the protocol has low communication overhead, is role-symmetric, non-interactive, and does not use encryption or timestamping. While a hash function may be used in the key derivation function (to derive a session key from the shared secret K), the security of Protocol 1 appears to be less reliant on the cryptographic strength of the hash function than some other AK protocols (such as Protocol 3 in [10]). In particular, the requirement that the key derivation function be preimage resistant appears unnecessary. Non-reliance on hash functions is advantageous because history has shown that secure hash functions are difficult to design.
6 Related protocols
This section presents two related protocols: a one-pass AK protocol (Protocol 2), and a three-pass
AKC protocol (Protocol 3).
6.1 One-pass authenticated key agreement protocol
The purpose of the one-pass AK protocol (Protocol 2) is for entities A and B to establish a shared
secret by only having to transmit one message from A to B. This can be useful in applications
where only one entity is on-line, such as secure email and store-and-forward. Protocol 2 is
depicted in Figure 2. It assumes that A a priori has an authentic copy of B's static public key.
Domain parameters and static keys are set up and validated as described in §3.
Figure 2: One-pass AK protocol (Protocol 2)
Protocol 2.
1. A generates a random integer rA, 1 ≤ rA ≤ n − 1, computes the point RA = rA·P, and sends this to B.
2. A computes sA = (rA + R̄A·wA) mod n and K = h·sA·(WB + W̄B·WB). If K = O, then A terminates the protocol run with failure.
3. B does an embedded key validation of RA (see §3.2). If the validation fails, then B terminates the protocol run with failure. Otherwise, B computes sB = (wB + W̄B·wB) mod n and K = h·sB·(RA + R̄A·WA). If K = O, then B terminates the protocol run with failure.
4. The shared secret is the point K.
Heuristic arguments suggest that Protocol 2 offers mutual implicit key authentication. The
main security drawback of Protocol 2 is that there is no known-key security and forward secrecy
since entity B does not contribute a random per-message component. Of course, this will be
the case with any one-pass key agreement protocol.
6.2 Authenticated key agreement with key confirmation protocol
We now describe the AKC variant of Protocol 1. It is depicted in Figure 3. Domain parameters
and static keys are set up and validated as described in §3. Here, MAC is a message authentication code algorithm and is used to provide key confirmation. For examples of provably secure and practical MAC algorithms, see Bellare, Canetti and Krawczyk [5], Bellare, Guerin and Rogaway [6], and Bellare, Kilian and Rogaway [7]. H_1 and H_2 are (independent) key derivation functions. Practical instantiations of H_1 and H_2 include hash-based constructions, for example applying a hash function such as SHA-1 to z together with a distinct counter for each of H_1 and H_2.
Figure 3: Three-pass AKC protocol (Protocol 3)
Protocol 3.
1. A generates a random integer rA, 1 ≤ rA ≤ n − 1, computes the point RA = rA·P, and sends this to B.
2. 2.1 B does an embedded key validation of RA (see §3.2). If the validation fails, then B terminates the protocol run with failure.
2.2 Otherwise, B generates a random integer rB, 1 ≤ rB ≤ n − 1, and computes the point RB = rB·P, sB = (rB + R̄B·wB) mod n, and K = h·sB·(RA + R̄A·WA).
2.3 If K = O, then B terminates the protocol run with failure. The shared secret is K.
2.4 B uses the x-coordinate z of the point K to compute two shared keys κ = H_1(z) and κ' = H_2(z).
2.5 B computes MAC_κ'(2, B, A, RB, RA) and sends this together with RB to A.
3. 3.1 A does an embedded key validation of RB. If the validation fails, then A terminates the protocol run with failure.
3.2 Otherwise, A computes sA = (rA + R̄A·wA) mod n and K = h·sA·(RB + R̄B·WB). If K = O, then A terminates the protocol run with failure.
3.3 A uses the x-coordinate z of the point K to compute two shared keys κ = H_1(z) and κ' = H_2(z).
3.4 A computes MAC_κ'(2, B, A, RB, RA) and verifies that this equals what was sent by B.
3.5 A computes MAC_κ'(3, A, B, RA, RB) and sends this to B.
4. B computes MAC_κ'(3, A, B, RA, RB) and verifies that this equals what was sent by A.
5. The session key is κ.
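The key confirmation messages of steps 2.5, 3.4, 3.5 and 4 are ordinary MAC computations over a message number, the two identities and the two ephemeral keys. A minimal sketch using HMAC-SHA-1 as the MAC and two hash calls as the key derivation functions H_1 and H_2 is given below; the input encodings and the choice of HMAC-SHA-1 are assumptions for illustration, not requirements of the protocol.

import hashlib, hmac

def H1(z: bytes) -> bytes:                        # illustrative instantiations of the two
    return hashlib.sha1(b"\x01" + z).digest()     # independent key derivation functions
def H2(z: bytes) -> bytes:
    return hashlib.sha1(b"\x02" + z).digest()

def mac(key: bytes, *fields: bytes) -> bytes:
    return hmac.new(key, b"|".join(fields), hashlib.sha1).digest()

def confirmation_tags(z: bytes, A: bytes, B: bytes, RA: bytes, RB: bytes):
    kappa, kappa_prime = H1(z), H2(z)             # session key and confirmation key
    tag_B = mac(kappa_prime, b"2", B, A, RB, RA)  # sent by B in step 2.5, checked by A in 3.4
    tag_A = mac(kappa_prime, b"3", A, B, RA, RB)  # sent by A in step 3.5, checked by B in 4
    return kappa, tag_B, tag_A

kappa, tB, tA = confirmation_tags(b"x-coordinate-of-K", b"A", b"B", b"RA-bytes", b"RB-bytes")
print(tB.hex(), tA.hex(), sep="\n")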
AKC Protocol 3 is derived from AK Protocol 1 by adding key confirmation to the latter.
This is done in exactly the same way AKC Protocol 2 of [10] was derived from AK Protocol 3 of
[10]. Protocol 2 of [10] was formally proven to be a secure AKC protocol. Heuristic arguments
suggest that Protocol 3 of this paper is a secure AKC protocol, and in addition has all the
desirable security attributes listed in x2.
7 Concluding remarks
The paper has presented new AK and AKC protocols which possess many desirable security
attributes, are extremely efficient, and appear to place less burden on the security of the key
derivation function than other proposals. Neither of the protocols proposed have been formally
proven to possess the claimed levels of security, but heuristic arguments suggest that this is the
case. It is hoped that the protocols, or appropriate modifications of them, can, under plausible
assumptions, be proven secure in the model of distributed computing introduced in [10].
Acknowledgements
The authors would like to thank Simon Blake-Wilson and Don Johnson for their comments on
earlier drafts of this paper. Ms. Law and Dr. Solinas would like to acknowledge the contributions
of their colleagues at the National Security Agency.
--R
"Minding your p's and q's"
"Keying hash functions for message authenti- cation"
"XOR MACs: New methods for message authentication using finite pseudorandom functions"
"The security of cipher block chaining"
"Random oracles are practical: a paradigm for designing efficient protocols"
"Entity authentication and key distribution"
"Key agreement protocols and their security analysis"
"On the risk of opening distributed keys"
"An improved protocol for demonstrating possession of discrete logarithms and some generalizations"
"Towards practical 'proven secure' authenticated key distri- bution"
"New directions in cryptography"
"Authentication and authenticated key ex- changes"
"A remark concerning m-divisibility and the discrete logarithm in the divisor class group of curves"
"Cryptographic method and apparatus for public key exchange with authenti- cation"
Contribution to ANSI X9F1 working group
"Authenticated multi-party key agreement"
Contribution to ANSI X9F1 and IEEE P1363 working groups
"A key recovery attack on discrete log-based schemes using a prime order subgroup"
"On seeking smart public-key distribution sys- tems"
"Reducing elliptic curve logarithms to logarithms in a finite field"
"Some new key agreement protocols providing mutual implicit authentication"
"Key agreement and the need for authentication"
Handbook of Applied Cryptography
"Secure Hash Standard (SHS)"
"Digital signature standard"
"SKIPJACK and KEA algorithm specification"
"An improved algorithm for computing logarithms over GF (p) and its cryptographic significance"
"Monte Carlo methods for index computation mod p"
"Fermat quotients and the polynomial time discrete log algorithm for anomalous elliptic curves"
"Evaluation of discrete logarithms in a group of p-torsion points of an elliptic curve in characteristic p"
"The discrete logarithm problem on elliptic curves of trace one"
"On Diffie-Hellman key agreement with short expo- nents"
"A key distribution paradox"
--TR
--CTR
Johann Großschädl , Alexander Szekely , Stefan Tillich, The energy cost of cryptographic key establishment in wireless sensor networks, Proceedings of the 2nd ACM symposium on Information, computer and communications security, March 20-22, 2007, Singapore
Kyung-Ah Shim , Sung Sik Woo, Cryptanalysis of tripartite and multi-party authenticated key agreement protocols, Information Sciences: an International Journal, v.177 n.4, p.1143-1151, February, 2007
Kyung-Ah Shim, Vulnerabilities of generalized MQV key agreement protocol without using one-way hash functions, Computer Standards & Interfaces, v.29 n.4, p.467-470, May, 2007
Maurizio Adriano Strangio, Efficient Diffie-Hellmann two-party key agreement protocols based on elliptic curves, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Mario Di Raimondo , Rosario Gennaro , Hugo Krawczyk, Secure off-the-record messaging, Proceedings of the 2005 ACM workshop on Privacy in the electronic society, November 07-07, 2005, Alexandria, VA, USA | diffie-hellman;key confirmation;authenticated key agreement;elliptic curves |
636898 | Modeling random early detection in a differentiated services network. | An analytical framework for modeling a network of random early detection (RED) queues with mixed traffic types (e.g. TCP and UDP) is developed. Expressions for the steady-state goodput for each flow and the average queuing delay at each queue are derived. The framework is extended to include a class of RED queues that provides differentiated services for flows with multiple classes. Finally, the analysis is validated against ns simulations for a variety of RED network configurations where it is shown that the analytical results match with those of the simulations within a mean error of 5%. Several new analytical results are obtained; TCP throughput formula for a RED queue; TCP timeout formula for a RED queue and the fairness index for RED and tail drop. | INTRODUCTION
The diverse and changing nature of service requirements among Internet applications mandates
a network architecture that is both flexible and capable of differentiating between the
needs of different applications. The traditional Internet architecture, however, offers best-effort
service to all traffic. In an attempt to enrich this service model, the Internet Engineering Task
Force (IETF) is considering a number of architectural extensions that permit service discrimi-
nation. While the resource reservation setup protocol (RSVP) [1], [2] and its associated service
classes [3], [4] may provide a solid foundation for providing different service guarantees, previous
efforts in this direction [5] show that it requires complex changes in the Internet architecture.
That has led the IETF to consider simpler alternatives to service differentiation. A promising
approach, known as Differentiated Services (DiffServ) [6], [7], [8] allows the packets to be classified
(marked) by an appropriate class or type of service (ToS) [9] at the edges of the network,
while the queues at the core simply support priority handling of packets based on their ToS.
Another Internet-related quality of service (QoS) issue is the queue's packet drop mechanism.
Active queue management (AQM) has been recently proposed as a means to alleviate some
congestion control problems as well as provide a notion of quality of service. A promising
class is based on randomized packet drop or marking. In view of the IETF informational RFC,
Random Early Detection (RED) [11] is expected to be widely deployed in the Internet.
Two variants of RED, RIO [12] and WRED [13] have been proposed as a means to combine the
congestion control features of RED with the notion of DiffServ.
This work aims to make a contribution towards analytical modeling of the interaction between
multiple TCP flows (with different round-trip times), non-responsive flows (e.g. UDP)
and active queue management (AQM) approaches (e.g. RED) in the context of a differentiated
services architecture. A new framework for modeling AQM queues is introduced and applied
to a network of RED queues with DiffServ capabilities termed DiffRED which is similar to the
RIO and WRED algorithms. Our work complements previous research efforts in the area of
Manuscript first submitted December 7, 2000.
A. A. Abouzeid is with the Department of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Institute,
8th St., Troy NY 12180, USA (e-mail: [email protected]).
S. Roy is with the Department of Electrical Engineering, University of Washington, Box 352500, Seattle WA 98195-2500,
USA (e-mail:[email protected]).
Part of the material in this paper (Section III.A,B) was presented at IEEE Global Communications Conference, San Francisco,
November 2000, and is included in a revised form in this paper.
TCP modeling under AQM regimes. Specifically, this work has the following unique contributions:
C1. TCP timeout in AQM Networks: We derive an expression for the probability of a timeout
that reduces the inaccuracy by as much as an order of magnitude vis-a-vis the previous state of the art
[14], [15].
C2. Poisson Process Splitting (PPS) Based Analysis for TCP flows: We show by analysis
and simulations that the PPS approximation holds for a population of heterogeneous (in terms
of delay) TCP flows sharing a bottleneck RED queue.
C3. Fairness Results: We derive expressions for the fairness index of a population of heterogeneous
TCP flows for the case of RED and compare it to Tail Drop.
C4. Network of AQM (RED) Routers: We extend the result to a network of AQM queues
with TCP (and UDP) flows.
C5. Network of DiffRED Routers: Finally, the results are extended to a network of RED and
DiffRED queues.
While the details of the above contributions are presented in the following sections, we first
comment on the relation between the above contributions and previous/concurrent related work.
Literature Review
The steady-state throughput of a single TCP-Reno (or NewReno) flow with Tail Drop (TD)
queueing policy where packet drops can be modelled as an i.i.d loss with probability p has
been shown to have the well-known inverse-square-root dependence on p. This result was derived by several
authors [16], [17], [18] using different analytic approaches; for example, [16] used a non-homogeneous
Poisson process approximation of the TCP flow while [17], [18] use a fixed point
approximation method, both yielding the same dependence to within a multiplicative constant.
This result which we call throughput formula1, does not incorporate the effect of TCP timeout
mechanism.
In [14], the above was extended to include timeouts for a Tail-Drop queue for scenarios where
it is reasonable to assume that conditioned on a packet loss, all remaining packets in the same
window of data are also lost (i.e. burst loss within the same window round). Based on this
assumption, the authors derive a more accurate formula which we term throughput formula2,
for the steady-state TCP throughput. Specifically, this derivation includes (as an intermediate
step) an expression for the probability, given a packet loss, that this loss will be detected by a
timeout - we term this the the timeout formula. An alternative timeout probability was derived
in [15] with independent packet losses (i.e. without the assumption of burst loss within a loss
window). However, the result is not available in closed form and requires the knowledge of the
window size probability distribution function.
Building on the above two formulae for the throughput of an individual TCP flow, Bu et al.
[19] extend the analysis to a network of RED queues by solving numerically coupled equations
representing the throughput for each individual flow. A similar approach is adopted in [20],
[21].
Our contribution C4 listed above is similar to [19], [20], [21] insofar that we also seek to
model a network of AQM queues with TCP flows. However, unlike [19], [20], [21], we do
not use any of the formulae derived earlier. Specifically, we show that the timeout formula
derived in [14] largely over-estimates the probability of a timeout detection for an AQM queue
due to the assumption of burst loss within a loss window, via analysis that is supported by ns2
simulation. Thus in C1, we derive a new formula for the timeout probability and compare the
accuracy of the new formula with those derived earlier.
In [11], the authors predict, without definitive analytical support, that the well-known bias of TCP against connections with larger
delay sharing a bottleneck Tail Drop queue will only be slightly affected by the introduction
of RED. Our results in C2, C3 precisely quantify the
amount of unfairness experienced by TCP flows over a bottleneck RED vis-a-vis a Tail Drop
queue by using a PPS approximation (that is shown to hold within an acceptable error margin)
and hence deriving a fairness index for a population of heterogeneous TCP flows for the two
cases of RED and Tail Drop.
Fig. 1. RED packet drop probability as a function of the average queue size.
Fig. 2. Schematic of the DiffRED packet drop probability for two classes of service.
Paper outline
The paper is organized as follows. In Section II, we present the network model considered
in this paper, describe the RED and DiffRED algorithms and outline our modeling approach.
Section III considers a single congested RED queue, the PPS property is shown to hold for a
population of heterogeneous flows, the analytical results for a single RED queue are validated
against ns simulations and the fairness results of RED versus Tail-Drop are presented. Section
IV presents a derivation of our timeout formula and a comparison between our results and
previous results from [14] and [15] against ns simulations. Section V presents the analysis of
a network of DiffRED 1 queues with TCP and UDP flows. The network model validation is
presented in Section VI. Section VII concludes the paper outlining future extensions.
II. MODELING AND NOTATION
A. The RED Algorithm
We briefly review the original RED algorithm (see [11] for further details). A RED router
calculates a time-averaged average queue size using a low-pass filter (exponentially weighted
moving average) or smoother over the sample queue lengths. The average queue size is compared
to two thresholds: a minimum threshold (minth) and a maximum threshold (maxth).
When the average queue size is less than minth no packets are dropped; when the average
queue size exceeds maxth, every arriving packet is dropped. Packets are dropped probabilistically
when the time-averaged queue size exceeds minth with a probability p that increases
linearly until it reaches maxp at average queue size maxth as shown in Fig. 1. RED also has
an option for marking packets instead of dropping them.
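To make the mechanism concrete, a small Python sketch is given below. It is illustrative only: parameter names and values are placeholders, and the count-based spreading of drops between marked packets used in the full algorithm of [11] is omitted.

def ewma(avg, q_sample, w_q=0.002):
    # Low-pass filter (exponentially weighted moving average) over queue samples.
    return (1.0 - w_q) * avg + w_q * q_sample

def red_drop_prob(avg, minth, maxth, maxp):
    # Original RED ramp: no drops below minth, linear increase up to maxp at maxth,
    # and certain drop once the average queue size reaches maxth.
    if avg < minth:
        return 0.0
    if avg >= maxth:
        return 1.0
    return maxp * (avg - minth) / (maxth - minth)

# Example: average queue of 80 packets with minth = 50, maxth = 150, maxp = 0.1.
print(red_drop_prob(80, 50, 150, 0.1))   # prints 0.03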
B. The DiffRED algorithm
We propose to investigate a DiffRED algorithm that is identical to the original RED algorithm
except for the following important change - DiffRED differentiates between different packets
by appropriately labelling each with the class it belongs to. If a packet is not labelled, it will
be assigned the default (lowest) class. The RED queue associates a priority coefficient, denoted
by # (c) , for each class 1 # c # C where C is the total number of classes. The lowest priority
class corresponds to # and the c-th (higher) priority class packets have coefficient # (c) ,
It will be shown in Section II that RED is a special case of a DiffRED queue with no service differentiation.
Packet Drop Probability(packets)
minth
Average Queue
Fig. 3. Packet drop probability for the "Gentle" variant
of RED.
Packet Drop Probability(packets)
minth
Average Queue
(2)
(1)
Fig. 4. Packet drop probability for the "Gentle" variant
of DiffRED.
# (2)
# (c)
# (C) . When a DiffRED queue receives a packet, it
updates the average queue size and drops the packet with probability equal to p # (c) for a class
c packet. Thus, for the same congestion level (i.e. average queue size), higher priority packets
are dropped with a lower probability. A schematic diagram of the DiffRED dropping algorithm
for the case of depicted in Figure 2. The plot shows that the packet drop probability
function (as a function of the average queue size) has a lower slope for the higher priority class
(for while the drop function for the lower priority class has a higher
slope.
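A minimal sketch of the class-dependent drop decision described above is given next. The names are hypothetical: gamma[c] plays the role of the class priority coefficient, with gamma[1] = 1 for the default class, and the behaviour above maxth is a guess, since the text only specifies the scaled probability p γ^(c).

import random

def diffred_drop(avg, cls, gamma, minth, maxth, maxp):
    # Base RED probability for the current average queue size ...
    if avg < minth:
        p = 0.0
    elif avg >= maxth:
        p = 1.0
    else:
        p = maxp * (avg - minth) / (maxth - minth)
    # ... scaled by the class priority coefficient: higher priority, fewer drops.
    return random.random() < p * gamma[cls]

gamma = {1: 1.0, 2: 0.4}          # class 2 is the higher priority class here
trials = 100_000
drops = sum(diffred_drop(100, 2, gamma, 50, 150, 0.1) for _ in range(trials))
print(drops / trials)             # close to 0.05 * 0.4 = 0.02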
C. The Gentle Variant of RED
In the recently recommended "gentle" 2 variant of RED [22], the packet-dropping probability
varies from maxp to 1 as the average queue size varies from maxth to 2maxth . Figures 3-4
depict the packet drop probability for RED and DiffRED with this option enabled. Our model
applies equally well to both cases, but the model with the "gentle" option requires less restrictive
assumptions and hence is applicable to a wider range of parameters as will be discussed in
Section II-E.
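The "gentle" ramp itself is a small piecewise function; the sketch below (illustrative) returns the base drop probability, which DiffRED would again scale by the class priority coefficient.

def gentle_red_drop_prob(avg, minth, maxth, maxp):
    # Linear ramp to maxp at maxth, then a second linear ramp from maxp to 1
    # as the average queue size grows from maxth to 2*maxth.
    if avg < minth:
        return 0.0
    if avg < maxth:
        return maxp * (avg - minth) / (maxth - minth)
    if avg < 2 * maxth:
        return maxp + (1.0 - maxp) * (avg - maxth) / maxth
    return 1.0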
D. Network Model and Notations
Assume a network of V queues that support M TCP flows and N UDP flows. Each TCP
connection is a one way data flow between a source S j and destination D j , with reverse traffic
consisting only of packet acknowledgments (ACKs) for each successfully received packet at the
destination (i.e. no delayed-ACKs). Throughout the paper, we will assume that the TCP flows
are indexed by 1 ≤ j ≤ M while UDP flows are indexed by M < j ≤ M + N. Figure 5
shows a network with two queues. Let t_j denote the total round-trip
propagation and transmission delay for the j-th flow (not including queuing
delay), and let μ_v denote the v-th queue link capacity (packets/sec).
Denote by O a zero-one (M + N) × V dimensional order matrix with elements o(j, v)
that specify the order in which the j-th flow traverses the v-th queue. A 0
entry indicates that the corresponding flow does not pass through that queue. Since each flow
passes through at least one queue, each row in O must have at least one entry equal to 1.
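As an illustration (the flow/queue indices below are hypothetical, and only membership is recorded, whereas o(j, v) in the paper additionally encodes the traversal order), the order matrix of a two-queue network in which flow 1 traverses both queues and flows 2 and 3 traverse one queue each could be written as:

import numpy as np

# O[j, v] = 1 iff flow j traverses queue v (here 3 flows and 2 queues).
O = np.array([[1, 1],    # flow 1 passes through queue 1 and queue 2
              [1, 0],    # flow 2 passes through queue 1 only
              [0, 1]])   # flow 3 passes through queue 2 only

# Every flow must traverse at least one queue: each row contains at least one 1.
assert (O.sum(axis=1) >= 1).all()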
In this work, we consider TCP-NewReno and assume the reader is familiar with the key
aspects of the window adaptation mechanism [23], [24] - i.e., the two modes of window increase
(slow start and congestion avoidance) and the two modes of packet loss detection (reception of
multiple ACKs with the same next expected packet number or timer expiry). Our analysis
This option makes RED much more robust to the setting of the parameters maxth and maxp.
Fig. 5. Experimental network topology with two RED queues.
applies best to TCP-NewReno since we assume that multiple packets within the same loss
window will trigger at most one duplicate ACK event (rather than one event per lost packet).
Modeling Approach
The primary challenge facing any analytical approach is to capture the complex interplay of
TCP congestion control algorithm and active queue management policies as RED. We adopt
a mean value analysis (using fixed point approximation) to estimate (long-term) averages of
various parameters as done in [17], [18], [19], [25], [26], while recognizing that such analysis
does not allow for accurate prediction of higher order statistical information (e.g. variances).
We next state and discuss the following modeling assumptions:
A1: A congested queue is fully utilized (except possibly at some isolated instants).
A2: Each TCP flow passes through at least one congested queue.
A3: The probability of a forced packet loss is negligible. 3
A1 is a well known result that has been observed in the literature through Internet measurements
as well as simulations and was also confirmed in all the simulations we performed. A2
is reasonable if the TCP flows window size are not limited by the receiver's advertised window
sizes. A3 implies that we model only 'unforced' (or probabilistic) RED packet losses and ignore
forced losses which should be avoided when configuring RED queue parameters 4 . Note that it
is much easier to configure a RED (or DiffRED) queue to avoid forced packet drops when the
"gentle" option is used, as noted in [22] due to its avoidance of the abrupt increase in the drop
probability from maxp to 1. Nevertheless, our analysis results hold even if the "gentle" option
is not used, as long as A3 holds.
III. SINGLE QUEUE
The aim of this section is two-fold: (i) we outline an analysis using the PPS approximation 5
and (ii) derive a result for the probability that a loss is detected by a timeout (TO) and not
Duplicate Acknowledgment (DA). Both of these results will be used in the following section in
a network (rather than a single queue) set-up.
A. PPS Based Analysis for TCP flows
Consider M Poisson processes, each with an arrival rate λ_j, at a single queue that randomly
drops packets with a probability p. Let P_{i,j} denote the probability that the i-th packet loss event
belongs to the j-th flow. From the PPS (Poisson Process Splitting) property,
P_{i,j} = λ_j / (λ_1 + λ_2 + ... + λ_M),   (1)
independent of i.
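The splitting property (1) is easy to confirm numerically. The following sketch (rates, drop probability and horizon are made-up values) merges tagged Poisson streams, drops each arrival independently with probability p, and compares the empirical share of drops per flow with λ_j / Σ_k λ_k.

import numpy as np

rng = np.random.default_rng(0)
lam = np.array([10.0, 20.0, 70.0])     # per-flow Poisson arrival rates (packets/s)
p_drop, horizon = 0.05, 10_000.0       # drop probability and simulated time (s)

counts = rng.poisson(lam * horizon)                  # number of arrivals per flow
owner = np.repeat(np.arange(len(lam)), counts)       # flow index of every arrival
dropped = owner[rng.random(owner.size) < p_drop]     # flow indices of dropped packets

print("empirical P_j :", np.bincount(dropped, minlength=len(lam)) / dropped.size)
print("lambda_j/sum  :", lam / lam.sum())            # the prediction of eq. (1)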
3 Forced packet losses are those due to either buffer overflow or due to packet drop with probability one.
4 This assumption has been adopted in many previous AQM-related research efforts (e.g. [27]).
5 Part of this Section was presented in a preliminary form in [28].
Consider now M TCP flows - let X i denote the i-th inter-loss duration at the queue as shown
in Fig. 6 and Q i,j the queue size for session j packets at the end of epoch i. Then the net
round-trip propagation and queuing delay for connection j at the end of the i-th epoch is the sum
of t_j and the queuing delay of the shared buffer. Since all flows share the same buffer, denote by Q the
steady-state average queue size.
Let W_{i,j} denote the window size of the j-th flow just before the end of the i-th epoch and W^av_j
denote flow j's time-averaged window size (as opposed to E[W_{i,j}], which represents the mean
window size just prior to the end of the epoch).
We postulate that, for M TCP flows sharing a queue,
(2)
Equation 2 is a postulate that the PPS approximation (1) holds for a population of TCP flows
sharing a random drop queue. To justify the above we first consider the case of TCP flows that
are perpetually in the congestion avoidance phase (additive increase - multiplicative decrease
(AIMD) mode of window size) and use an analytical description of AIMD to derive expressions
for the steady state average window and throughput. These expressions are then validated
against ns simulations in support of (2). A modified PPS approximation for the case of timeouts
is postponed to Sec. V.
The window evolution of the TCP-Reno sessions is governed by the following system of
equations (refer to Figure 6),
w.p. P i,j
Taking the expectation of both sides of (3)
where we used the independence approximation E[W i,j P fixed point approximation
Similarly, denoting #
, then, by fixed point approximation to (2),
. (5)
Substituting by (5) in (4),
Equation (6) is a system of M quadratic equations in the M unknowns W j (or #
it can be readily verified by direct substitution that
Fig. 6. A schematic showing the congestion window size for two (i.e. M = 2) sessions with different
round-trip delays sharing a bottleneck link with a RED queue. Packet loss events are indicated by an X.
is an explicit solution for (6) and hence,
Remarkably, the above result implies that the average steady-state window size is the same for
all flows; also, W j implicitly depends on Q and X that need to be determined.
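This prediction can be sanity-checked by simulating the epoch dynamics (3) directly, attributing each loss to a flow with probability proportional to its current rate as postulated in (2). The sketch below is illustrative only: the epoch lengths are drawn from a fixed exponential distribution rather than generated by a RED queue, queuing delay is ignored, and the round-trip times are made-up values.

import numpy as np

rng = np.random.default_rng(1)
rtt = np.array([0.02, 0.04, 0.08, 0.16])   # heterogeneous round-trip times (s)
W = np.ones_like(rtt)                      # congestion windows (packets)
mean_epoch = 0.05                          # assumed mean inter-loss time (s)
sums = np.zeros_like(rtt)
n_epochs = 200_000

for _ in range(n_epochs):
    X = rng.exponential(mean_epoch)        # inter-loss duration (epoch length)
    W = W + X / rtt                        # additive increase: one packet per RTT
    sums += W                              # window just before the end of the epoch
    rates = W / rtt
    j = rng.choice(len(rtt), p=rates / rates.sum())   # the loss hits flow j, cf. (2)
    W[j] *= 0.5                            # multiplicative decrease of that flow only

print(np.round(sums / n_epochs, 2))        # the four averages come out roughly equal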
To relate the above result for W j to W av j
, let W # i,j denote flow j window size just after the
start of epoch i. It is straightforward to verify that
and from (3)
hence
where
Finally, let r j denote flow j steady-state throughput (in packets/sec). Then,
In case of full bottleneck link utilization,
which, by substituting from (14) and
Finally, let p denote the average packet drop probability of the RED queue. Then,
Fig. 7. Schematic of multiple TCP connections sharing a bottleneck link.
TABLE I. Parameters of the single-queue simulation experiments (the columns include minth, maxth, 1/maxp, and the average queue size Q from analysis and from simulation).
Since the RED average packet drop probability also satisfies
p = maxp (Q - minth) / (maxth - minth),   (18)
a relationship between Q and X follows by equating (17) and (18).
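The closing step can be made concrete with a scalar root finder. The sketch below is illustrative only and rests on simplifying assumptions not made explicit in the text: full utilization with equal windows W for all flows, per-flow rates approximated by W/RTT_j with RTT_j = t_j + Q/μ, one unforced loss per epoch so that the model-side drop probability is roughly 2/W², the K_j correction and timeouts ignored, and made-up numeric values.

import numpy as np
from scipy.optimize import brentq

mu = 1250.0                                # bottleneck capacity (packets/s)
t = np.array([0.02, 0.04, 0.08, 0.16])     # round-trip delays excluding queuing (s)
minth, maxth, maxp = 50.0, 150.0, 0.1      # RED parameters

def model_loss_rate(Q):
    # Equal windows, full utilization: W = mu / sum(1/RTT_j); p ~ 2 / W**2.
    rtt = t + Q / mu
    W = mu / np.sum(1.0 / rtt)
    return 2.0 / W**2

def red_loss_rate(Q):
    # RED's linear drop law, eq. (18).
    return maxp * (Q - minth) / (maxth - minth)

Q_star = brentq(lambda Q: red_loss_rate(Q) - model_loss_rate(Q), minth, maxth)
print("steady-state average queue size (packets):", round(Q_star, 1))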
B. Single Queue Validation
The analytic expressions for various parameters of interest for RED queues derived in the
preceding section will be compared against results from ns simulations.
Figure 7 depicts the ns simulation model used for validating the above results. M TCP flows
share a common bottleneck link with capacity - packets/sec and (one-way) propagation delay
# . Forward TCP traffic is a one way flow between a source S j and the sink, with reverse traffic
consisting only of packet acknowledgments (ACKs) for each successfully received packet at the
sink. We consider the long run (steady-state) behavior and assume that the sources always have
packets to transmit (e.g. FTP of large files). The j-th flow encounters a one-way propagation
delay # j , and is connected via a link with capacity # j packets/sec. to the bottleneck queue. The
total round-trip time for propagation and transmission (but excluding queuing) for the j-th flow
is thus
Table I lists the parameters used for two sets of simulation experiments with RED policy using
the ns simulator [29] (more validation experiments are available in [30]); the link capacities
- and # j (packets/second), propagation delays # and # j (seconds) for the topology in Fig. 7 as
well as the number of competing flows used for each simulation. In each simulation, a value
for the link delay τ_1 of the first flow and a factor d > 1 is chosen such that the remaining τ_j
are specified according to τ_j = τ_1 d^(j-1), i.e. the access link (from source to queue) propagation
delay profile increases exponentially. The values of τ_1 and d for each simulation are listed in
Table I. For each simulation we compare the analysis results to the simulation measurements
of Q in Table I. All simulations use an equal packet size of 1 KByte. Figures 8-9 show the
measured and the computed r j for each competing flow, where the x-axis indicates the computed
We choose to show the un-normalized r j (packets/second) rather than the normalized
throughput since the differences between the simulation measurements and the analysis results
are indistinguishable if shown normalized to the link capacity.
Fig. 8. Results for Exp. 1: average transmission rate (packets/second) versus average round-trip time (seconds), analysis and simulation.
Fig. 9. Results for Exp. 2: average transmission rate (packets/second) versus average round-trip time (seconds), analysis and simulation.
C. Fairness Properties of RED versus Tail-Drop
TCP's inherent bias towards flows with shorter round-trip time as a result of its congestion
window control mechanism is well-known - in [31], it has been shown that when multiple TCP
flows with different round-trip times share a Tail Drop queue, the steady-state throughput of a
flow is inversely proportional to the square of the average round-trip delay (see Appendix A for
a simple derivation of this result). Our analysis/simulations results (specifically, (9) and (14))
of a single RED queue with multiple TCP flows and without timeouts (see comment at the end
of this section) reveal a dependence of the throughput that differs from that of Tail Drop.
Consider parameter K j defined by (13). For a population of heterogeneous flows, P j is
directly proportional to the TCP flow throughput and hence inversely proportional to the round-trip
delay - K j increases with round-trip delay. Since
mild variation. Hence, for a large number of flows, the dependence of flow throughput on K j
can be neglected 6 . Thus, the conclusions in the remainder of this section will be a lower bound
to the improvement in fairness compared to Tail Drop.
With the above assumptions (i.e. neglecting the effect of K j ), the results of (9) and (14)
show that the average transmission rates of competing connections at a congested RED queue
is inversely proportional to the average round-trip times and not to the square of the average
round-trip times as in Tail Drop. A fairness coefficient φ may be defined as the ratio of the lowest
to the highest average throughput among the competing flows sharing a queue (i.e. ideally,
φ = 1). Let φ_TD and φ_RED denote the fairness coefficients of the equivalent 7 Tail
Drop and RED queues. Then φ_TD ≤ φ_RED,
implying that the RED queue achieves a higher fairness coefficient.
The previous definition of fairness only considers the flows with the maximum and minimum
transmission rates; in the pioneering work of [32], the following index φ(r) was introduced to
quantify fairness based on the rates r = (r_1, ..., r_n) of all n flows:
φ(r) = (r_1 + r_2 + ... + r_n)^2 / (n (r_1^2 + r_2^2 + ... + r_n^2)).
An ideally fair allocation (r_1 = r_2 = ... = r_n) results in φ(.) = 1, and a worst
case allocation (a single flow receiving all the throughput) results in φ(.) = 1/n, which approaches zero for large n.
6 In the limit as the number of flows tends to infinity, K flows. Note that if the impact of K j were to be
included, it would help decrease TCP bias against larger propagation delay since it increases with the round-trip time.
7 An equivalent queue is one that has identical TCP flows and the same average queue size ([11])
With this definition, the fairness index for a queue with n TCP flows can be evaluated in closed form.
As an illustrative example, consider TCP flows with an exponentially decaying (or increasing)
delay profile with factor d. Then, assuming large n (and hence neglecting the effect of K j for
the case of RED), it follows after some simple algebraic manipulations that
φ_RED(d) ≈ (d + 1) / (n (d - 1))
and
φ_TD(d) ≈ (d^2 + 1) / (n (d^2 - 1)),
i.e. φ_TD(d) ≤ φ_RED(d).
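These expressions are easy to probe numerically. The sketch below (illustrative values for n and d) evaluates the index φ(r) for rates proportional to 1/RTT_j (the RED behaviour derived above, neglecting K_j and timeouts) and proportional to 1/RTT_j² (Tail Drop) under an exponentially increasing delay profile.

import numpy as np

def jain(r):
    # Jain's fairness index: (sum r)^2 / (n * sum r^2).
    r = np.asarray(r, dtype=float)
    return r.sum()**2 / (len(r) * (r**2).sum())

n, d = 16, 1.2
rtt = d ** np.arange(n)                     # RTT_j proportional to d^(j-1)
print("RED  (rates ~ 1/RTT)  :", round(jain(1.0 / rtt), 3))
print("TD   (rates ~ 1/RTT^2):", round(jain(1.0 / rtt**2), 3))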
An intuitive explanation for this decrease in bias (i.e. fairness coefficient closer to one) for
RED is in order. TCP increases its window size by one every round-trip time - thus, lower delay
links see a faster increase in their window size. In case of synchronized packet loss (Tail Drop
queues), all windows are (typically) reduced at the same time, and hence the average window
size is inversely proportional to the average round-trip time. But since the throughput of a flow
is proportional to the window size divided by the round-trip time, the average rate of each TCP
session is inversely proportional to the square of the average round-trip time. In case of RED,
when a packet is dropped, the chances that the packet will belong to a certain connection is (on
the average) proportional to that connection transmission rate- thus TCP sessions with lower
delays are more likely to have their windows reduced. The analytical results show that this
causes the average window size to be equal for all connections (6) and thus the throughput of
a connection is only inversely proportional to the round-trip-delay. Thus while the basic RED
algorithm does not achieve throughput fairness among the competing flows, it substantially
reduces the bias as compared to Tail Drop.
Finally, the above fairness results do not take into account the effect of TCP timeouts. For
reasons previously shown via simulations in the literature, the occurrence of timeouts will, in
general, increase the unfairness in TCP against longer delay links.
IV. TIMEOUT FORMULA FOR A TCP FLOW WITH RANDOM PACKET LOSS
In this section, we first derive an expression for the probability that a packet loss event will be
detected by a timeout (TO) (i.e. not a duplicate ACK detection (DA) event). Next, we compare
our result and those in the literature against ns simulations.
A. Analysis
Consider an instant during a TCP-Reno session at which the window size is in the congestion
avoidance phase. Let w denote the current congestion window size, in packets, during which at
least one packet loss takes place; each packet is assumed lost independently with probability p.
Let the current round be labelled as round i for reference (see Figure 10). Let P(w, k) denote
the probability, conditioned on at least one packet loss during the current window round, that
packet k will be the first packet lost during the round. Then,
P(w, k) = p (1 - p)^(k-1) / (1 - (1 - p)^w).
The k-1 packets transmitted before the first loss in the current round will cause the reception
of k - 1 ACKs in the next round (i + 1), hence the release of an equal number of packets.
Fig. 10. Analysis of the probability of a TO vs. TD. Successful packets (i.e. not dropped), dropped packets, and
ACK packets for packet k are represented by white-filled, completely shaded and partly shaded rectangles,
respectively.
Let N_2 denote the number of duplicate ACKs received in round i + 2 due to the
successful transmission of N 2 out of the k-1 packets in round i+1. Similarly, consider the w-k
packets transmitted after the first loss during round i (the current round), and let N_1
denote the number of packets successfully transmitted out of the w - k packets. Notice that,
unlike the N 2 packets, the N 1 packets are transmitted after the first loss packet in the current
round. Hence, N 1 duplicate ACKs will be generated immediately in round i + 1.
Let D(w, k) denote the probability that a DA detection takes place in rounds i + 1, i + 2.
Then,
D(w, k) = P(N_1 + N_2 ≥ 3), where N_1 ~ Binomial(w - k, 1 - p) and N_2 ~ Binomial(k - 1, 1 - p),
so that N_1 + N_2 ~ Binomial(w - 1, 1 - p).
Notice that the above expression is independent of k. This is expected, and can be stated simply
as the probability that at least 3 out of the w - 1 packets other than the first lost packet are successfully transmitted.
Let Q(w) denote the probability of a TO detection in rounds i + 1, i + 2. Then
Q(w) = 1 - D(w) = Σ_{i=0}^{2} C(w-1, i) (1 - p)^i p^(w-1-i).   (26)
For small p, retaining only the most significant terms yields
Q(w) ≈ (1/2)(w - 1)(w - 2) p^(w-3).   (27)
Following a DA detection, the window is halved and TCP attempts to return to its normal
window increase mode in congestion avoidance. However, one or more losses may take place
in the immediate following round (i.e. before the window is increased). In this paper, consecutive
congestion window rounds in which packet loss is detected by DA without any separating
additive window increase will be treated as one loss window, that ultimately terminates via either
a DA or a TO. Let Z(w) denote the probability that a TO will take place (either immediately
or following a sequence of consecutive DA's). Then,
log w
where the logarithm is base 2.
Finally, recalling that W_i denotes the window size at the end of the i-th epoch, applying a fixed
point approximation to (28) yields
E[Z(W_i)] ≈ Z(E[W_i]) = Z(W),   (29)
where Z(W) is given by (28).
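Under the independent-loss model of this section the duplicate-ACK count available after the first loss (N_1 plus N_2 above) is Binomial(w − 1, 1 − p), so Q(w) of (26) can be evaluated directly and cross-checked by simulation, as in the sketch below. Note that this covers only Q(w); the consecutive-DA iteration leading to Z(w) in (28) is not implemented here.

import numpy as np
from math import comb

def q_timeout(w, p):
    # Q(w): fewer than 3 of the w-1 relevant packets generate duplicate ACKs.
    return sum(comb(w - 1, i) * (1 - p)**i * p**(w - 1 - i) for i in range(3))

def q_timeout_mc(w, p, trials=200_000, seed=2):
    # Monte Carlo check: draw the duplicate-ACK count as Binomial(w-1, 1-p).
    rng = np.random.default_rng(seed)
    dup_acks = rng.binomial(w - 1, 1 - p, size=trials)
    return float(np.mean(dup_acks < 3))

for w in (4, 8, 16):
    print(w, round(q_timeout(w, 0.05), 4), round(q_timeout_mc(w, 0.05), 4))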
B. Validation and comparison with previous results
In this section, previous results for Z(w) are listed and compared with the results of a representative
simulation using ns consisting of multiple TCP flows sharing a bottleneck RED queue.
In [14], the following expression for Z(W ) was derived
assuming that, once a packet is lost, all remaining packets in the same congestion window are
also lost (which is probably reasonable for a Tail Drop queue). In [19], the authors argue that
this is still a good result when applied to an AQM like RED which drops packets independently.
We show here that this is not true.
In [15], the following expression for Z(w) was derived
which is the same as our expression for Q(w) in the previous section (26). However, the authors
do not analyze the case where multiple consecutive DA's ultimately cause a TO. Also, the
expression for E[Z(w)] was not derived in [15], since the analysis relies on the knowledge
of the congestion window probability distribution function. Nevertheless, we show here that
applying a fixed point approximation to (31), yielding
would result in an under-estimation of the timeout probability.
In order to compare between the accuracy of our result (29) and the previous results (30)
and (32) when applied to a RED queue, we construct the following simulation using ns. A
single congested RED queue is considered with a number of identical TCP flows ranging from
5 to 40. The RED parameters are set as follows:
and ECN disabled. The link capacity is 1 Mbps and the two-way propagation delay (not including
queuing) is 20 ms. The packet size is 1 KB. The buffer size is selected large enough to avoid
buffer overflow. The trace functionality within ns was enhanced so as to record, for one of the
flows, the time instant, congestion window size and the reason (TO or DA) for each window
decrease. Post-processing scripts were used to compute the average window size just before a
packet loss detection (i.e. W ), the average packet drop through the queue (p), the total number
of packet loss detection events that are detected by TO and those detected by DA, and hence the
measured E[Z(W)]. Finally, the measured p and W from the simulations were substituted in
(29), (30) and (32) to compute Z(W). The results are shown in Figure 11.
Fig. 11. Comparison between different timeout formulae against ns simulations for TCP-NewReno: measured E[W]/10, measured E[p], measured TO probability E[Z(W)], Z(E[W]) from our analysis, and Z(E[W]) from Padhye et al., plotted versus the number of TCP flows.
Figure 11 does not show the results from (32) since the values produced are practically 0, i.e.,
(32) excessively under-estimates the probability of a timeout. Figure 11 shows that our result is
much more accurate than that of [14] (as expected). The improvement accuracy is up to an order
of magnitude for small number of competing flows (and hence small p). The reason is that, for
small p, the assumption of conditional burst loss used in [14] becomes very unrealistic. As p
increases, the RED queue operation approaches that of Tail Drop, since p approaches maxp
(and hence all arriving packets are dropped with higher probability, resulting in synchronous
operation of the flows and thus burst losses). However, even for high p, our formula is more
accurate than that of [14].
V. NETWORK ANALYSIS
Since a RED queue is a special case of a DiffRED queue with all priority coefficients equal to one, we present an
analysis for the case of DiffRED queues. In the first subsection, we consider a network of TCP
and UDP flows and we model the additive increase/multiplicative decrease dynamics of TCP
only. Next, we extend the model results to capture the TCP timeout behavior.
A. Modeling without timeouts
Let minth v , maxth v and maxp v
(c) denote the DiffRED parameters of the v th queue. Let
(c) denote the priority coefficient of class c, where # v
denote the average
(steady state) packet drop probability in the v th queue. Let p v
(c) denote the average packet drop
probability for class c at the v th queue. Then for the case of the "gentle" variant,
qv -minthv
th
th
(c)
while for the case without the "gentle" variant,
qv -minthv
According to the description of the DiffRED dropping algorithm (Figure 2)
where (35) assumes that all the DiffRED queues in a network use the same value of the class
priority coefficients (i.e. # (c) is independent of v) 8 .
Let #(j) and p v (j) denote the priority coefficient and average drop probability at the v th
queue for the class that flow j belongs to. For the case of the "gentle" variant, it follows
from (33) and (35) that
-maxp (1)
while for the case without the "gentle" variant
Let r j denote the steady state packet transmission rate of flow j. Let the generator matrix
G denote the fraction of the j th flow transmission rate at the input of the v th queue. Then the
elements of G are
h(o(j, v), o(j, k)) (38)
where
h(o(j, v), o(j,
denote the steady state probability that the dropped packet belongs to the v th queue.
Applying A4,
Let P j|v denote the probability that, conditioned on a packet loss from the v th queue, that
packet loss belongs to the j th flow, then, using A4,
Let P j denote the probability that a packet loss belongs to the j th flow. Then,
which by substituting from (40) and (41) yields,
8 It is possible to extend the analysis by assuming that each queue has a different set of priority coefficients by defining #v (c)
for each queue. This may be specifically valuable when modeling the interaction between DiffServ enabled and non-DiffServ
queues. However, for initial model simplicity, we postpone this to future extensions.
where the above is simply the ratio of the sum of the packet drop rates of flow j at all the queues
to the sum of the packet drop rates of all flows in the network.
Now, we can consider one of the TCP flows in the network and consider its window evolution.
Let Y i denote the time at which the i th random packet loss (caused by any of the V queues and
causing a packet loss from one of the M event takes place, and X
the i-th inter-loss duration (epoch).
Let the j th TCP session window size (in packets) at the beginning and end of the i th epoch
be denoted by W i,j and W i+1,j respectively. Let q i,v denote the queue size of queue v at the end
of epoch i; then the net round-trip propagation and queuing delay for connection j at the end of
the i th epoch is #
Following the same steps as before, the window evolution of the TCP sessions is governed
by the system of equations (4), where E[#
This system
of equations captures only the TCP additive increase/multiplicative decrease dynamics but not
the impact of timeout - this is done in the next section.
From Sec. III-A,
and
where K j is given by (12).
Substituting by X from (44), P j from (43) and r j from (45) in (4),
Finally, for congested links (A1),
M+N
while for uncongested (underutilized) links,
M+N
The set of M equations (46)-(48) in the M
are then solved numerically as explained in Section IV.
B. Modeling TCP timeout mechanism
In this section, we show how the model can be extended to incorporate the effect of timeouts.
Since it is well known that the slow start behavior has a negligible effect on the TCP steady-state
throughput, we neglect slow start and assume that the window size after a timeout is
set to 1 and that TCP continues in the congestion avoidance phase after some idle duration
(discussed below). For simplicity, only single timeouts are considered since the difference in
performance between a single timeout and multiple timeouts is insignificant (both result in very
Fig. 12. Approximate timeout behavior. After timeout, slow-start is neglected, and TCP is assumed to resume in
congestion avoidance mode with an initial window size value of one packet.
low throughput). Thus, our objective is to model the approximate timeout behavior depicted in
Fig.12 , where a single timeout causes the window size to be reduced to one, and TCP refrains
from transmission (i.e. is idle) for a period of one round-trip time (# i,j ).
Let Z(W i,j ) denote the probability that, given a packet loss belonging to flow j, that loss will
result in a timeout. Then, following the same steps as before (for
w.p.
w.p.
where
Taking the expectations of both sides and simplifying,
Let b_j denote the steady-state proportion of time that TCP flow j is busy (b_j = 1 for
the non-responsive flows). Following the same derivation steps as before,
and
An expression for derived (and validated) in Sec. IV (equation 29). An expression
for b j is derived as follows.
Let X i,j denote the inter-loss times between packet losses of TCP flow j (as opposed to X i
which denotes the inter-loss times in the network) and consider the window size evolution of
flow j in between two consecutive Duplicate ACK (DA) events (see Fig. 12) Then,
and hence
Let NDA (W j ) denote the average (from a fixed point approximation argument) expected
number of consecutive DA's. Then N_DA(W_j) follows a geometric distribution with parameter Z(W_j),
and finally,
Thus, a set of M equations in the M+V unknowns is obtained by substituting by b j from (57)
and P j from (52) in (51). The remaining V equations correspond to (47)-(48) with modification
to include the timeout behavior yielding,
M+N
for fully utilized links and
M+N
for underutilized links.
A solution for the M +V unknowns is obtained by solving the M +V equations numerically
as explained in the following section.
VI. NETWORK MODEL RESULTS
In order to validate the analysis presented in the previous sections, we present a representative
number of comparisons between the numerical results obtained from the solution of the M +V
non-linear equations and the simulation results from the ns-2 simulations.
The simulations reported here use the "gentle" option of RED in the ns-2 simulator set to
"true" (i.e. it uses the gentle variant) and the "setbit" option set to false (i.e. packet drops
instead of marking). Other experiments without the "gentle" option provide results with similar
accuracy (not reported here due to space constraints).
We first describe the technique used for solving the non-linear equations. We then summarize
the modifications to the RED algorithm in ns-2 to simulate the DiffRED behavior. And finally,
we present the validation experiments.
A. The network solver
In the analysis section, the V + M unknowns (W_j for the M TCP flows and q_v for the V queues)
are related by a set of non-linear equations. In general, a set of non-linear equations
can be solved using a suitable numerical technique. Two main problems encountered in solving
non-linear equations are; (i) the uniqueness of the solution and (ii) the choice of an appropriate
initial condition that guarantees the convergence towards a true solution. In this section, we
describe the algorithm used to achieve both objectives.
Our algorithm is based on the observation that there exists a unique solution for (46)-(48)
that satisfies 0 ≤ q_v ≤ maxth_v and W_j > 0. Thus, the network solver is composed of the
following two steps (or modules):
Step I
The set of equations (46)-(48) is solved using any iterative numerical technique. We used a
modified globally convergent Newton-Raphson technique [33] that operates as follows: (1) It
chooses initial conditions randomly within the solution space; (2) Performs Newton-Raphson
technique while checking for convergence; (3) If the algorithm converges to a valid solution,
the program terminates; else, the program repeats from step (1) again.
Note that the resulting solution does not take into account timeouts or non-responsive flows -
hence, step II.
Step II
This step of our algorithm uses the solution provided in step I as initial conditions - it also uses
a globally convergent Newton-Raphson technique applied this time to the extended modeling
equations describing the timeout behavior in (51)-(52), (58)-(59) (and, if applicable, the UDP
behavior).
Thus, the network solver is composed of two modules, each of which uses an iterative numerical
method; the first module solves the simplified model and thus provides an approximate
solution for the network which is then fed into the next module that refines the network solution
providing a more accurate one. This technique has proved to converge to the correct solution in
all experiments we tried, a representative of which (a total of 160 experiments) are reported in
the following sections.
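A skeleton of Step I is sketched below (the function and variable names are placeholders). The residual argument would encode the M + V equations (46)-(48), and Step II would re-run the same routine on the extended system (51)-(52), (58)-(59), using the Step-I output as its initial condition.

import numpy as np
from scipy.optimize import fsolve

def solve_step(residual, n_unknowns, bounds_ok, seed=0, max_restarts=50):
    # Repeatedly run a Newton-type root finder from random starting points inside
    # the solution space until it converges to a solution that satisfies
    # 0 <= q_v <= maxth_v and W_j > 0 (checked by the caller-supplied bounds_ok).
    rng = np.random.default_rng(seed)
    for _ in range(max_restarts):
        x0 = rng.uniform(0.1, 50.0, size=n_unknowns)   # random initial condition
        x, _, ier, _ = fsolve(residual, x0, full_output=True)
        if ier == 1 and bounds_ok(x):
            return x
    raise RuntimeError("no valid solution found; increase max_restarts")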
B. Modifying ns-2 to model DiffRED
The following modifications to ns-2 were incorporated. In the packet common header, we
added a priority field that contains an integer specifying the priority class of the packet. Sim-
ilarly, an additional parameter for the TCP object specifying its class of service was added;
when a TCP session is established, this parameter is set to the selected class of service for that
session. The TCP mechanism is also modified to label each packet by the TCP session priority
(by copying the TCP session class of service to the packet header of the packet being transmit-
ted). Finally, the RED algorithm is modified so as to first check for the packet class (from the
packet header) and compute the packet drop probability according to the mechanism outlined
in Sections II.B (for the case without the "gentle" option) or II.C (for the "gentle" variant).
C. Experimental Topology
We use the topology of Figure 5. It consists of two RED/DiffRED queues Q 1 and Q 2 . There
are a total of five sets of flows going through the queues and each flow set is composed of four
identical flows. In the figure, S denotes the source (origin) while D denotes the destination
for those flows. Three sets of flows (S1, S2 and S3) arrive at Q 1 , out of which only one
traverses through Q 2 . Two other sets of flows arrive at Q 2 . The only two bottleneck links
in the topology are those of Q 1 and Q 2 , where the link speeds are - 1 and - 2 (packets/sec) and
In the experiments, we vary the priority class of S2, allowing it to have a higher priority in
some of the experiments to compensate it for the higher loss rate it suffers since it has to traverse
both queues.
Each experiment is composed of five simulation runs, at the end of each we compute the
parameters of interest; average queue size for each queue, average window size for each TCP
flow and the average goodput (number of packets received at the destination per second). The
duration of each run is long enough to guarantee the operation of the network in the steady
state for a long duration. We then compute the average, among the simulation runs, of each
of these parameters and compare with the network solver results. In the figures, we plot one
point (x, y) for each experiment, where x represents the simulation result and y represents the
corresponding result obtained from applying our analytical network solver. Hence, ideally, all
points should lie on a line with a slope of 45 degrees.
TABLE II. Parameter settings for the experimental network topology of Fig. 5 and Fig. 16.
Fig. 13. Average queue size obtained from simulations and analysis for the experiments with TCP traffic only.
D. Experiments with TCP only
A total of eighty experiments have been performed with various parameters. The parameters
of five of these experiments are shown in Table I. Three additional similar sets are performed
but by reducing the bottleneck link speeds by half for each set (i.e. the first experiment in the
last set would configure the bottleneck links at 4Mbps) thus forming twenty experiments. These
twenty experiments are repeated four times, with the priority coefficient of the higher priority
flow S2 varying from 0.1 to 1.0 (in steps of 0.3). The results are shown in Figures 13-15.
Note that this also validates the analysis for the RED case, since setting the priority coefficient
to 1.0 corresponds to RED operation.
In Figure 13, each experiment results in two points, one for each queue. Hence, the figure
contains a total of 160 simulation/analysis comparison points. On the other hand, in Figure 14
and Figure 15, each experiment results in three comparison points. The reason is that TCP flows
S1 and S3 (on one hand) and S4 and S5 (on the other) have identical goodput and congestion window size
values. Thus, each experiment renders three window sizes (and three goodput values), one for
S1 and S3, one for S2 and a third for S4 and S5, for a total of 240 comparison points for each
of these figures.
E. Experiments with TCP and UDP
To validate the model results for mixed TCP and UDP traffic, we modify the topology of
Figure
5 to Figure 16 by adding two sets of UDP (constant bit rate) flows, one that originates at
Fig. 14. Average congestion window size from simulations and analysis for the experiments with TCP traffic only.
Fig. 15. The average goodput (kbps) obtained from simulations and analysis for the experiments with TCP traffic only.
S6 and passes through both queues before exiting the network while the other originates at S7
and passes only through Q2. The transmission rates of the UDP flows are set such that the total
transmission rates of S1 and S2 equals 10% of the link capacities - 1 and - 2 . The results are
shown in Figures 17-19.
Just like Figures 13-14, Figures 17-18 contains a total of 160 and 240 simulation/analysis
comparison points (respectively). However, unlike Figure 15 which contains 240 comparison
points Figure 19 contains an additional 160 comparison points accounting for the 2 UDP flows
in each of the 80 experiments.
Finally, in Figure 20, we show the results of varying the priority coefficient for one of the
experiments (the first experiment in Table I) separately. The figure shows how the priority
coefficient is effective in boosting S2's goodput from its low value when no differentiation is applied
to a much higher value (almost equal to the other TCP flows' rates) as the coefficient is decreased.
Fig. 16. Experimental network topology with two RED queues.
Fig. 17. Average queue size obtained from simulations and analysis for mixed TCP and UDP traffic.
Fig. 18. Average congestion window size obtained from simulations and analysis for mixed TCP and UDP traffic.
Fig. 19. Average goodput (kbps) obtained from simulations and analysis for mixed TCP and UDP traffic.
Fig. 20. Effect of varying the priority coefficient of S2 on its goodput (goodput of the TCP flows S1 and S3, S4 and S5, and of the UDP flows S6 and S7, versus the priority coefficient of Set 2). The experiment parameters are shown in
the first entry of Table I.
VII. CONCLUSION
In this paper, an analytical model for a network of RED/DiffRED queues with multiple competing
TCP and UDP flows was presented. Unlike previous work, our analysis is specifically
targeted towards AQM schemes which are characterized by random packet drop (unlike Tail
Drop queues which drop packets in bursts). Our main contributions are (i) An accurate timeout
formula that is orders of magnitude more accurate than the best-known analytical formula, (ii)
Closed form expressions for the relative fairness of RED and Tail-Drop towards heterogeneous
flows (iii) Analysis of RED queues in a traditional as well as differentiated services net-
work. Our analysis has relied on a set of approximations to the timeout dynamics as well as to
the loss rate process of each flow in the network. The analytical results were validated against
ns simulations. These show that the results are accurate within a mean margin of error of 2%
for the average TCP throughput, 5% for the average queue size and 4% for the average window
size attributable to the approximations introduced in Section II. The model proposed should
be applicable to a variety of AQM schemes that rely on randomized (instead of deterministic)
packet drops.
A number of avenues for future research remain. First, the model can be extended to the
analysis of short-lived flows and the effects of a limited advertised congestion window. Also,
model extensions to other versions of TCP (e.g. Tahoe, SACK, etc.) and to the case of ECN
[34] may be considered. Finally, as noted in footnote 8, especially with the expected coexistence
of DiffServ networks with best effort networks, an extension of this model that captures the
interaction between traditional RED queues and DiffRED would be valuable.
APPENDIX A
In Sec. III-C, we have stated that the average window size of a TCP flow passing through a
Tail Drop queue is inversely proportional to its round-trip delay.
Proof: We show below that, for a Tail-Drop queue, the steady state window size for any flow
is inversely proportional to the round-trip time.
In Tail Drop queues, buffer overflow causes (typically) at least one packet loss from each
flow passing through the queue ([35], [36]). Using the same notations as Sections II and III,
Taking expectation of both sides of (60), and denoting
proving the claim.
--R
Resource ReSerVation Protocol (RSVP)-Version 1 functional specification
RSVP: A new resource ReSerVation protocol.
Specification of controlled-load network element service
Specification of guaranteed quality of service.
The Use of RSVP with IETF Integrated Services.
An architecture for differentiated services.
An approach to service allocation in the Internet.
Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers.
Recommendations on queue management and congestion avoidance in the Internet.
Random early detection gateways for congestion avoidance.
Explicit allocation of best-effort packet delivery service
Technical Specification from Cisco.
Modeling TCP Reno performance: A simple model and its empirical validation.
Comparative performance analysis of versions of TCP in a local network with a lossy link.
The stationary behavior of ideal TCP congestion avoidance.
The macroscopic behavior of the TCP congestion avoidance algorithm.
Promoting the use of end-to-end congestion control in the internet
Fixed point approximations for TCP behavior in an AQM network.
A framework for practical performance evaluation and traffic engineering in ip networks.
Recommendation on using the gentle variant of RED.
TCP/IP Illustrated
The NewReno Modification to TCP's Fast Recovery Algorithm.
Blocking probabilities in large circuit-switched networks
The performance of TCP/IP for networks with high bandwidth-delay products and random loss
Analytic understanding of RED gateways with multiple competing TCP flows.
Stochastic Models of Congestion Control in Heterogeneous Next Generation Packet Networks.
Flow Control in High-Speed Networks
Analysis of the increase and decrease algorithms for congestion avoidance in computer networks.
Numerical Recipes in C: The Art of Scientific Computing.
A proposal to add explicit congestion notification (ECN) to IP.
Oscillating behavior of network traffic: a case study simulation.
Congestion avoidance and control.
--TR
Congestion avoidance and control
Analysis of the increase and decrease algorithms for congestion avoidance in computer networks
Numerical recipes in C (2nd ed.)
TCP/IP illustrated (vol. 1)
Random early detection gateways for congestion avoidance
The performance of TCP/IP for networks with high bandwidth-delay products and random loss
The macroscopic behavior of the TCP congestion avoidance algorithm
Explicit allocation of best-effort packet delivery service
Comparative performance analysis of versions of TCP in a local network with a lossy link
Promoting the use of end-to-end congestion control in the Internet
Modeling TCP Reno performance
Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED
Stochastic models of congestion control in heterogeneous next generation packet networks | differentiated services;performance analysis;network modeling;random early detection;UDP;TCP |
637151 | Massively parallel fault tolerant computations on syntactical patterns. | The general capabilities of reliable computations in linear cellular arrays are investigated in terms of syntactical pattern recognition. We consider defects of the processing elements themselves and defects of their communication links. In particular, a processing element (cell) is assumed to behave as follows. Dependent on the result of a self-diagnosis it stores its working state locally such that it becomes visible to the neighbors. A defective cell cannot modify information but is able to transmit it unchanged with unit speed. Cells with link failures are not able to receive information via at most one of their both links to adjacent cells. Moreover, static and dynamic defects are distinguished.It is shown that fault tolerant real-time recognition capabilities of two-way arrays with static defects are characterizable by intact one-way arrays and that one-way arrays are fault tolerant per se. For arrays with dynamic defects it is proved that all failures can be compensated as long as the number of adjacent defective cells is bounded.In case of arrays with link failures it is shown that the sets of patterns that are reliably recognizable are strictly in between the sets of (intact) one-way and (intact) two-way arrays. | Introduction
Nowadays it is becoming possible to build massively parallel computing systems
that consist of hundreds of thousands of processing elements. Each single component
is subject to failure such that the probability of misoperations and loss
of function of the whole system increases with the number of its elements. It
was von Neumann [19] who rst stated the problem of building reliable systems
out of unreliable components. Biological systems may serve as good examples.
Due to the necessity to function normally even in case of certain failures of
their components, nature developed mechanisms which neutralize the errors; in
other words, they are working in some sense fault tolerant. Error detecting and
correcting components should not be global to the whole system because they
themselves are subject to failure. Therefore, the fault tolerance has to be a
design feature of the single elements.
A model for massively parallel, homogeneously structured computers is the cellular
array. Such devices of interconnected parallel acting finite state machines
have been studied from various points of view.
In [4, 5] reliable arrays are constructed under the assumption that a cell (and
not its links) at each time step fails with a constant probability. Moreover, such
a failure does not incapacitate the cell permanently, but only violates its rule
of operation in the step when it occurs. Under the same constraint that cells
themselves (and not their links) fail (i.e. they cannot process information but are
still able to transmit it unchanged with unit speed) fault tolerant computations
have been investigated, e.g. in [6, 14] where encodings are established that allow
the correction of so-called K-separated misoperations, in [9, 10, 17, 20] where the
famous firing squad synchronization problem is considered in defective cellular
arrays, and in terms of interacting automata with nonuniform delay in [7, 11]
where the synchronization of the networks is the main object either.
Here we are interested in more general computations. In terms of pattern recognition
the general capabilities of reliable computations are considered. Since cellular
arrays have intensively been investigated from a language theoretic point
of view, pattern recognition (or language acceptance) establishes the connection
to the known results and, thus, inheres the possibility to compare the fault
tolerant capabilities to the non fault tolerant ones.
In the sequel we distinguish three different types of defects.
Static defects are the main object of Section 3. It is assumed that each cell
has a self-diagnosis circuit which is run once before the actual computation.
The results are stored locally in the cells and subsequently no new defects may
occur. Otherwise the whole computation would become invalid. A defective cell
cannot modify information but is able to transmit it with unit speed. Otherwise
the parallel computation would be broken into two non interacting parts and,
therefore, would become impossible at all.
In Section 4 the defects are generalized. In cellular arrays with dynamic defects
it may happen that a cell becomes defective at any time. The formalization of
the corresponding arrays includes also the possibility to repair a cell dynamically.
The remaining sections concern another natural type of defects. Not the cells
themselves cause the misoperation but their communication links. It is assumed
that a defective cell is not able to receive information via at most one of its both
links to adjacent cells. The corresponding model is introduced in Section 5 in
more detail. In Section 6 it is shown that the real-time arrays with link failures
are able to reliably recognize a wider range of sets of patterns than intact one-way
arrays. In order to prove this result some auxiliary algorithmic subroutines
are given. Section 7 concludes the investigations by showing that the devices
with link failures are strictly weaker than two-way arrays. Hence, link failures
cannot be compensated in general but, on the other hand, do not decrease the
computing power to that of one-way arrays.
In the following section we define the basic notions and recall the underlying
intact cellular arrays and their mode of pattern recognition.
Preliminaries
We denote the integers by Z, the positive integers {1, 2, ...} by N and the
set N ∪ {0} by N_0. X_1 × ... × X_d denotes the Cartesian product of the sets
X_1, ..., X_d; if the sets coincide we use the notation X^d alternatively. We write
⊆ for inclusions and ⊂ if the inclusion is strict. Let M be some set and
f : M → M be a function; then we denote the i-fold composition of f by f^[i], i ∈ N.
A two-way resp. one-way cellular array is a linear array of identical finite state
machines, sometimes called cells, which are connected to their both nearest
neighbors resp. to their nearest neighbor to the right. The array is bounded by
cells in a distinguished so-called boundary state. For convenience we identify
the cells by positive integers. The state transition depends on the current state
of each cell and the current state(s) of its neighbor(s). The transition function
is applied to all cells synchronously at discrete time steps. Formally:
Definition 1 A two-way cellular array (CA) is a system ⟨S, δ, #, A⟩, where
1. S is the finite, nonempty set of cell states,
2. # ∉ S is the boundary state,
3. A ⊆ S is the set of input symbols,
4. δ : (S ∪ {#})³ → S is the local transition function.
If the flow of information is restricted to one-way (i.e. from right to left) the
resulting device is a one-way cellular array (OCA) and the local transition
function maps from (S ∪ {#})² to S.
A configuration of a cellular array at some time t ≥ 0 is a description of its
global state, which is actually a mapping c_t : {1, ..., n} → S.
The data on which the cellular arrays operate are patterns built from input
symbols. Since here we are studying one-dimensional arrays only, the input
data are finite strings (or words). The set of strings of length n built from
symbols from a set A is denoted by A^n, the set of all such finite strings by A*.
We denote the empty string by ε and the reversal of a string w by w^R. For its
length we write |w|. The set A^+ is defined to be A* \ {ε}.
In the sequel we are interested in the subsets of strings that are recognizable by
cellular arrays. In order to establish the connection to formal language theory
we call such a subset a formal language. Moreover, sets L and L′ are considered
to be equal if they differ at most by the empty word, i.e. L \ {ε} = L′ \ {ε}.
Now we are prepared to describe the computations of (O)CAs. The operation
starts in the so-called initial configuration c_{0,w} at time 0, where each symbol of
the input string w = a_1 ... a_n is fed to one cell: c_{0,w}(i) = a_i, 1 ≤ i ≤ n. During
a computation the (O)CA steps through a sequence of configurations, whereby
successor configurations are computed according to the global transition function
Δ: Let c_t, t ≥ 0, be a configuration; then its successor configuration is as
follows:
c_{t+1}(1) = δ(#, c_t(1), c_t(2)),
c_{t+1}(i) = δ(c_t(i−1), c_t(i), c_t(i+1)), 2 ≤ i ≤ n−1,
c_{t+1}(n) = δ(c_t(n−1), c_t(n), #)
for CAs and
c_{t+1}(i) = δ(c_t(i), c_t(i+1)), 1 ≤ i ≤ n−1,
c_{t+1}(n) = δ(c_t(n), #)
for OCAs. Thus, Δ is induced by δ.
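To make the operational definition above concrete, here is a minimal Python sketch of the global transition function Δ for a one-way cellular array. The particular state set and local rule used at the end are placeholders chosen only for illustration; they are not taken from the paper.

```python
def oca_step(config, delta, boundary="#"):
    """One synchronous update of a one-way cellular array:
    cell i sees its own state and the state of its right neighbor
    (the rightmost cell sees the boundary symbol instead)."""
    n = len(config)
    return [delta(config[i], config[i + 1] if i + 1 < n else boundary)
            for i in range(n)]

def run_oca(word, delta, steps):
    """Start from the initial configuration c_{0,w} and iterate the global map."""
    config = list(word)
    for _ in range(steps):
        config = oca_step(config, delta)
    return config

# toy local rule (an assumption, purely for illustration): each cell copies its
# right neighbor, so after |w|-1 steps the leftmost cell holds the last symbol
shift = lambda own, right: right if right != "#" else own
print(run_oca("abcd", shift, 3))   # ['d', 'd', 'd', 'd']
```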
An input string w is recognized by an (O)CA if at some time i during its course
of computation the leftmost cell enters a final state from the set of final states
F ⊆ S.
Definition 2 Let M = ⟨S, δ, #, A⟩ be an (O)CA and F ⊆ S be a set of final
states.
1. An input w ∈ A^+ is recognized by M at time step i ∈ N if the leftmost cell of
the configuration c_i is in a final state, i.e. c_i(1) ∈ F.
2. L(M) = {w ∈ A^+ | w is recognized by M at some time step} is the set of
strings (language) recognized by M.
3. Let t : N → N be a mapping. If M can recognize all w ∈ L(M)
within at most t(|w|) time steps, then M is said to be of time complexity t.
The family of all sets which are recognizable by some CA (OCA) with time
complexity t is denoted by L_t(CA) (L_t(OCA)). If t equals the identity function
id(n) = n, recognition is said to be in real-time, and if t is equal to k · id for an
arbitrary rational number k ≥ 1, then recognition is carried out in linear-time.
Correspondingly, we write L_rt((O)CA) and L_lt((O)CA). In the sequel we will
use corresponding notations for other types of recognizers.
Now we are going to explore some general recognition capabilities of CAs that
contain some defective cells. The defects are in some sense static [17]: It is
assumed that each cell has a self-diagnosis circuit which is run once before the
actual computation. The result of that diagnosis is stored in a special register
of each cell such that intact cells can detect defective neighbors. Moreover
(and this is the static part), it is assumed that during the actual computation
no new defects may occur. Otherwise the whole computation would become
invalid. What is the effect of a defective cell? It is reasonable to require that a
defective cell cannot modify information. On the other hand, it must be able to
transmit information in order to avoid the parallel computation being broken
into two non-interacting parts and, thus, being impossible at all.
The speed of information transmission is one cell per time step. Another point
of view on such devices is to define a transmission delay between every two
adjacent cells and to allow nonuniform delays [7, 11]. Now the number of
defective cells between two intact ones determines the corresponding delay.
Since the self-diagnosis is run before the actual computation we may assume
that defective cells do not fetch an input symbol. Nevertheless, real-time is the
minimal possible time needed for non-trivial computations and, consequently,
is defined to be the number of all cells in the array. In order to obtain a
computation result here we require the leftmost cell not to be defective. Later
on we can omit this assumption.
Formally we denote CAs with static defects by SD-CA and the corresponding
language families by L t (SD-CA).
Considering the general real-time recognition capabilities of SD-CAs the best
case is trivial. It occurs when all the cells are intact: The capabilities are those
of CAs. On the other hand, fault tolerant computations are concerned with
the worst case (with respect to our assumptions on the model). The next two
results show that in such cases the capabilities can be characterized by intact
OCAs, from which it follows that the bidirectionality of the information flow gets
lost.
Theorem 3 If a set is fault tolerant real-time recognizable by a SD-CA, then
it is real-time recognizable by an OCA.
Proof. Let D be a SD-CA and let k ∈ N be an arbitrary positive integer.
Fix the number of cells of D and a mapping f that places the intact cells at
the positions f(i), 1 ≤ i ≤ k, while all the other cells are defective (cf. Figure 1).
In between the cells f(i) and f(i + 1) there are f(i) − f(i + 1) − 1 defective ones.
During a real-time computation the states of a cell f(i) at times t ≥ 2^i cannot
influence the overall computation result. The states would reach the leftmost
cell after another f(i) steps. This gives an arrival time which is greater
than real-time.

Figure 1: One-way information flow in SD-CAs.
Conversely, the cell f(i) computes all its states up to time t independently
of the states of its intact neighbors to the left: The nearest intact
neighbor to the left is cell f(i + 1) and there are 2^i − 1 defective cells in between.
Up to now we have shown that the information flow in D is one-way. But
compared to OCAs the cells in D are performing more state changes. It remains
to show that this does not lead to stronger capabilities.
Let i be some intact cell of D. As long as it operates independently of its
intact neighbors it runs through state cycles, provided that the adjacent defective
regions are long enough. Let s be such a cycle.
Now one can always enlarge the lengths of the defective regions such that they
correspond to multiples of the cycle lengths.
Therefore, during their isolated computations the cells run through complete
cycles. Obviously, such a behavior can be simulated by the cells of an OCA
since the cycle lengths are bounded by the number of states of D. 2
In order to obtain the characterization of real-time SD-CAs by real-time OCAs
we need the converse of Theorem 3.
Theorem 4 If a set is real-time recognizable by an OCA, then it is fault tolerant
real-time recognizable by a SD-CA.
Proof. The idea of the simulation is depicted in Figure 2. Each cell of a
SD-CA that simulates a given OCA waits for the first information from its
right intact neighbor. The waiting period is signaled to its left intact neighbor
by signals labeled . This information leads to a waiting period of the left
intact neighbor. Each intact cell performs a simulation step when it receives a
non-waiting signal.
It follows that a cell sends exactly as many waiting signals to the left as are
defective cells located to its right. Therefore, the leftmost cell needs exactly
one simulation step for each intact cell and one waiting step for each defective
cell and, thus, computes the result in real-time. 2
Figure 2: OCA simulation by SD-CAs.
The following corollary formalizes the characterization:
Corollary 5 L_rt(SD-CA) = L_rt(OCA).
From the previous results follows the interesting fact that OCAs are per se fault
tolerant. Additional defective cells do not decrease the recognition capabilities.
Corollary 6 L_rt(SD-OCA) = L_rt(OCA).
It is often useful to have examples for string sets not recognizable by a certain
device.
Example 7 Neither the set of duplicated strings {ww | w ∈ A^+} nor the set
of strings whose lengths are a power of 2, {w | |w| = 2^n for some n ∈ N}, is
fault tolerant real-time recognizable by SD-CAs.
(It has been shown in [13] resp. [15] that they do not belong to the family L_rt(OCA).)
The previous results imply another natural question. Is it possible to regain
the recognition power of two-way CAs in fault tolerant SD-CA computations
by increasing the computation time? How much additional time would be
necessary? In [3, 18] it has been shown that an OCA can simulate a real-time
CA computation in twice real-time if the input word is reversed. It is an open
problem whether or not L rt (CA) is closed under reversal. But nevertheless, a
piece of good news is that only one additional time step for each intact cell is
necessary in order to regain the computation power in a fault tolerant manner.
Theorem 8 A set is recognizable by an OCA in twice real-time (and, thus, its
reversal by a CA in real-time) if and only if it is fault tolerant recognizable by
a SD-CA in real-time+m, where m denotes the number of intact cells.
Proof. Let L be a set that is recognizable by an OCA in twice real-time and
denote the length of an input word by n. Let us consider the situation in the
proof of Theorem 4 at real-time: The corresponding SD-CA has simulated n
steps of the corresponding OCA. But due to the previous delay, from now on
no further delay of the intact cells is necessary. Thus, the next n steps of the
OCA can be simulated in n steps by the SD-CA either. Since the number of
intact cells is exactly n the only if part follows.
Now let L be a set that is recognizable by a SD-CA in time real-time+m. We
follow the same idea as in the proof of Theorem 3. The distance between two
intact cells has to be enlarged. If we number the intact cells from right to left
by 1, ..., m, then the distance between cells i and i + 1 has to be enlarged at
least by 2^{i−1}(m − 1) in order to prevent additional flow of information. 2
One assumption on our model has been an intact leftmost cell. Due to Corollary
5 we can omit this requirement. Now the overall computation result is
indicated by the leftmost intact cell of the one-way array which operates per se
independently of its defective left neighbors.
In the following cellular arrays with dynamic defects (DD-CA) are introduced.
Dynamic defects can be seen as generalization of static defects. Now it becomes
possible that cells fail at any time during the computation. Afterwards they
behave as in the case of static defects.
In order to define DD-CAs more formally it is helpful to suppose that the
state of a defective cell is a pair of states of an intact one. One component
represents the information that is transmitted to the left and the other one the
information that is transmitted to the right. By this formalization we obtain
the type indication of the cells (defective or not) for free: Defective cells are
always in states from S 2 and intact ones in states from S. A possible failure
implies a weak kind of nondeterminism for the local transition function.
Definition 9 A two-way cellular array with dynamic defects (DD-CA) is a
system ⟨S, δ, #, A⟩, where
1. S is the finite, nonempty set of cell states which satisfies S ∩ S² = ∅,
2. # ∉ S is the boundary state,
3. A ⊆ S is the set of input symbols,
4. δ : (S ∪ S² ∪ {#})³ → S ∪ S² is the local transition function, which satisfies
the condition that a defective cell merely passes its two state components on unchanged.
If a cell works fine the local transition function maps to a state from S. Otherwise
it maps to a pair from S² indicating that the cell is now defective. The
definition includes the possibility to repair a cell during the computation. In
this case δ would map from a pair to a state from S. Note that the nondeterminism
in a real computation is a determinism since the failure or repair of a
cell is in some sense under the control of the outside world.
We assume that initially all cells are intact and as in the static case that the
leftmost cell remains intact.
In the sequel we call an adjacent subarray of defective cells a defective region.
The next results show that dynamic defects can be compensated as long as the
lengths of defective regions are bounded.
Theorem 10 If a set is real-time recognizable by a CA, then it is real-time
recognizable by a DD-CA if the lengths of its defective regions are bounded by
some k ∈ N_0.
Proof. Assume for a moment that the lengths of the defective regions are
exactly k. A DD-CA D that simulates a given CA ⟨S, δ, #, A⟩ is equipped with
additional registers. The general idea of the proof is depicted in Figure 3. As long
as a cell does not detect a defective neighbor it stores the states of its neighbors
and its own state in some of its additional registers as shown in the figure. At
time t the state of cell i might be as follows:
Assume now that the right neighbor of cell i becomes defective. Due to our
assumption we know that there must exist a defective region of length k at the
right of cell i. During the next k time steps cell i stores the received states and
computes missing states from its register contents as shown in Figure 3.
Subsequently its state might be as follows
Figure 3: Compensation of defective cells.
From now on cell i receives the states that the intact cell i + k + 1 was
in at times t, t + 1, ..., and it is able to compute the necessary intermediate states
from its register contents.
A crucial point is that the lengths of defective regions are fixed to k. Due to
that assumption a cell i knows when it receives the valid states from its next
intact neighbor i + k + 1 or i − k − 1. We can relax the assumption as required to
lengths of at most k cells by the following extension of the simulation. Each cell
is equipped with a modulo k counter. Since the current value of the counter is
part of the cell state it is also part of the transmitted information. A cell that
stores received information in its additional registers stores also the received
counter value. Now it can decide whether it receives the valid state from its
next intact neighbor by comparing the received counter value to the latest
stored counter value. If they are equal then the received information is from a
defective cell, otherwise it is valid and the cell uses no more additional registers.
New failures in subsequent time steps can be detected by the same method. If
the received counter value is equal to the latest stored counter value then additional
cells have become defective. In such cases the cell uses correspondingly
more additional registers in order to compensate the new defects.
It remains to explain what happens if two defective regions are joined by the failure of
a single connecting intact cell. Up to now we have used the transmitted contents
of the main registers only. But actually the whole state, i.e. all register contents,
is transmitted. In the case in question the next intact cells to the left and
right of the joined defective region can fill additional registers as desired. 2
Corollary 11 If a set is real-time recognizable by an OCA, then it is real-time
recognizable by a DD-OCA if the lengths of its defective regions are bounded
by some k ∈ N_0.
In order to provide evidence for general fault tolerant DD-CA computations we
have to relax the assumption of bounded defective region lengths. We are again
concerned with the worst case. The hardest scenario is as follows. Initially all
cells are intact and thus fetching an input symbol. During the rst time step
all but the leftmost cell fail. (Needless to say, if the leftmost cell becomes also
defective then nobody would expect a reasonable computation result.)
It is easy to see that in such cases the recognition capabilities of DD-CAs are
those of a single cell, a finite state machine (see Figure 4).
Lemma 12 If a set is fault tolerant recognizable by a DD-CA, then it is recognizable
by a finite state machine and thus regular.
Corollary 13 If a set is fault tolerant recognizable by a DD-OCA, then it is
recognizable by a finite state machine and thus regular.
5 Devices with Link Failures
In Section 3 it has been shown for CAs with defective cells that in case of large
adjacent defective regions the bidirectional information flow gets lost. This
means that the fault tolerant computation capabilities of two-way arrays are
those of one-way arrays. The observation gives rise to investigate cellular arrays
with defective links in their own right. In order to explore the corresponding general
reliable recognition capabilities we have to take a closer look at the device in
question.

Figure 4: Worst case DD-CA computation.
Again we assume that each cell has a self-diagnosis circuit for its links which is
run once before the actual computation, and again the result of that diagnosis
is indicated by the states of the cells such that intact cells can detect defective
neighbors. What is the effect of a defective link? Suppose that each two
adjacent cells are interconnected by two unidirectional links. On one link information
is transmitted from right to left and on the other one from left to right.
If both links are failing, then the parallel computation would be broken
into two non-interacting parts and, thus, would be impossible at all. Therefore,
it is reasonable to require that at least one of the links between two cells does
not fail.
Suppose for a moment that there exists a cell that, due to a link failure, cannot
receive information from its right neighbor. This would imply that the overall
computation result (indicated by the leftmost cell) is obtained with no regard
to the input data to the right of that defective cell. So all reliable computations
would be trivial. In order to avoid this problem we extend the hardware such
that if a cell detects a right to left link failure it is able to reverse the direction
of the other (intact) link. Thereby we are always concerned with defective links
that cannot transmit information from left to right.
Another point of view on such devices is that some of the cells of a two-way
array behave like cells of a one-way array. Sometimes in the sequel we will call
them OCA-cells.
The result of the self-diagnosis is indicated by the states of the cells. Therefore
we have a partitioned state set.
Definition 14 A cellular array with defective links (mO-CA) is a system
⟨S, δ_i, δ_d, #, A, m⟩, where
1. S = S_i ∪ S_d is the partitioned, finite, nonempty set of cell states satisfying S_i ∩ S_d = ∅,
2. # ∉ S is the boundary state,
3. A ⊆ S_i is the set of input symbols,
4. m ∈ N_0 is an upper bound for the number of link failures,
5. δ_i : (S ∪ {#})³ → S_i is the local transition function for intact cells,
6. δ_d : (S ∪ {#})² → S_d is the local transition function for defective cells.
A reliable recognition process has to compute the correct result for all distributions
of the at most m defective links. In advance it is, of course, not known
which of the links will fail. Therefore, for mO-CAs we have a set of admissible
start congurations as follows.
For an input string w = a_1 ... a_n ∈ A^+ the configuration c_{0,w} is an admissible
start configuration of an mO-CA if there exists a set D ⊆ {1, ..., n} of defective
cells, |D| ≤ m, such that c_{0,w}(i) = a_i for all i ∈ {1, ..., n} \ D and c_{0,w}(i) is the
corresponding defective variant of a_i for all i ∈ D.
For a clear understanding we define the global transition function Δ of mO-CAs
as follows: Let c_t, t ≥ 0, be a configuration of an mO-CA with defective cells
D; then its successor configuration is given by
c_{t+1}(i) = δ_i(c_t(i−1), c_t(i), c_t(i+1)) for i ∉ D and
c_{t+1}(i) = δ_d(c_t(i), c_t(i+1)) for i ∈ D,
where c_t(0) and c_t(n+1) denote the boundary state #.
Due to our definition of δ_i and δ_d, once the computation has started the set D
remains fixed, which meets the requirements of our model.
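A minimal Python sketch of one mO-CA step, assuming the simplified two-rule form given above: cells in the defect set D update like one-way (OCA) cells, all others like ordinary two-way cells. The concrete toy rules at the end are placeholders for illustration only.

```python
def mo_ca_step(config, delta_i, delta_d, defective, boundary="#"):
    """One synchronous step; cells listed in `defective` cannot receive
    information from their left neighbor, so they update one-way."""
    n = len(config)
    left = lambda i: config[i - 1] if i > 0 else boundary
    right = lambda i: config[i + 1] if i + 1 < n else boundary
    return [delta_d(config[i], right(i)) if i in defective
            else delta_i(left(i), config[i], right(i))
            for i in range(n)]

# toy rules (assumptions, not from the paper): keep the lexicographic maximum
intact = lambda l, s, r: max(s, r if r != "#" else s, l if l != "#" else s)
broken = lambda s, r: max(s, r if r != "#" else s)
print(mo_ca_step(list("1302"), intact, broken, defective={2}))
```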
In the following we are going to answer the following questions. Do some
link failures reduce the recognition power of intact CAs or is it possible to
compensate the defects by modifications of the transition function as shown for
CAs with defective cells? Can mO-CAs recognize a wider range of string sets
than intact OCAs?
6 mO-CAs are Better than OCAs
The inclusions L_rt(OCA) ⊆ L_rt(mO-CA) ⊆ L_rt(CA) follow immediately
from the definitions. Our aim is to prove that both inclusions are strict.
6.1 Subroutines
In order to prove that real-time mO-CAs are more powerful than real-time
OCAs we need some results concerning CAs and OCAs which will later on
serve as subroutines of the general construction.
6.1.1 Time Constructors
A strictly increasing mapping f : N → N is said to be time constructible if there
exists a CA such that for an arbitrary initial configuration the leftmost cell
enters a final state at and only at time steps f(j), 1 ≤ j ≤ n. A corresponding
CA is called a time constructor for f . It is therefore able to distinguish the
time steps f(j).
The following lemma has been shown in [1].
Lemma 15 The mapping n ↦ 2^n is time constructible.
Proof. The idea of the proof is depicted in Figure 5. At initial time the
leftmost cell of a CA sends a signal with speed 1=3 to the right. At the next
time step a second signal is established that runs with speed 1 and bounces
between the slow signal and the left border cell. The leftmost cell enters a
final state at every time step it receives the fast signal. The correctness of the
construction is easily seen by induction. 2
Figure 5: A time constructor for 2^n.
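The doubling behind this construction can be checked with the idealized signal kinematics alone, ignoring the discrete cell-level bookkeeping of the actual CA: the fast signal leaves the leftmost cell, catches a signal moving right at speed 1/3, and is back at the leftmost cell at exactly twice its departure time. The sketch below is only this arithmetic, not a cellular automaton.

```python
from fractions import Fraction

def bounce_times(rounds=6):
    """Idealized kinematics of the 2^n time constructor: a slow signal moves
    right at speed 1/3, a fast signal bounces between the leftmost cell and
    the slow signal at speed 1.  Returns the fast signal's return times."""
    times, depart = [], Fraction(1)      # fast signal first leaves at time 1
    for _ in range(rounds):
        meet = Fraction(3, 2) * depart   # catches up with the slow signal ...
        arrive = meet + meet / 3         # ... and needs meet/3 steps to get back
        times.append(int(arrive))        # arrive == 2 * depart, always an integer
        depart = arrive                  # bounces back out immediately
    return times

print(bounce_times())    # [2, 4, 8, 16, 32, 64]
```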
A general investigation of time constructible functions can be found, e.g. in
[12, 2]. Especially, there exists a time constructor for 2 n that works with an
area of at most n cells.
Actually, we will need a time constructor for the mapping 2^{2^n}. Fortunately, in
[12] the closure of these functions under composition has been shown.
Corollary 16 The mapping n ↦ 2^{2^n} is time constructible. There
exists a corresponding time constructor that works with an area of at most 2^n
cells.
6.1.2 Binary OCA-Counters
Here we need to set up some adjacent cells of an OCA as a binary counter.
Actually, we are not interested in the value of the counter but in the time
step at which it overflows. Due to the information flow the rightmost cell of
the counter has to contain the least significant bit. Assume that this cell can
identify itself. In order to realize such a simple counter every cell has three
registers (cf. Figure 6). The third ones are working modulo 2. The second ones
are signaling a carry-over to the left neighbor and the first ones are indicating
whether the corresponding cell has generated no carry-over (0), one carry-over
(1) or more than one carry-over (2) before. Now the whole counter can be tested
by a left-moving signal. If on its travel through the counter all the first registers
contain 0 and additionally both carry-over registers of the leftmost cell
contain 1, then it recognizes the desired time step. Observe that we need
the second carry-over register in order to check that the counter produces an
overflow for the first time.
Figure 6: A binary OCA-counter.
Figure 7: A binary CA-shift-right counter.
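To make the ripple behaviour of the OCA-counter concrete, the sketch below models a stripped-down version: the rightmost cell is incremented every step and carries move one cell to the left per step. The three bookkeeping registers described above are omitted, so this only illustrates the timing of the first overflow, not the full construction.

```python
def first_overflow_step(k):
    """Ripple-carry binary counter in a one-way array: the rightmost cell
    (LSB) is incremented every step, carries travel one cell left per step.
    Returns the step at which the leftmost (MSB) cell first emits a carry,
    i.e. the counter overflows for the first time."""
    bits = [0] * k          # bits[0] is the MSB, bits[k-1] the LSB
    carry_in = [0] * k      # carry arriving at each cell in the current step
    step = 0
    while True:
        step += 1
        new_carry = [0] * k
        for i in range(k):
            inc = 1 if i == k - 1 else carry_in[i]   # LSB counts, others ripple
            if inc:
                bits[i] ^= 1
                if bits[i] == 0:                     # flipped 1 -> 0: carry out
                    if i == 0:
                        return step                  # MSB overflowed
                    new_carry[i - 1] = 1
        carry_in = new_carry

print([first_overflow_step(k) for k in (1, 2, 3, 4)])
```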
6.1.3 Binary CA-Shift-Right Counters
For this type of counter we need two-way information flow. It is set up in a
single (the leftmost) cell of a CA. Since we require the least significant bit to
be again the rightmost bit in the counter we have to extend the counter every
time it produces an overflow. The principle is depicted in Figure 7.
Each cell has two registers. One for the corresponding digit and the other
one for the indication of a carry-over. Due to the two-way information flow
the leftmost cell can identify itself. Every time it generates a carry-over the
counter has to be extended. For this purpose the leftmost cell simulates an
additional cell to its left appropriately. This fact signals its right neighbor the
need to extend the counter by one cell. The right neighbor reacts by simulating
in addition the old process of the leftmost cell which now computes the new
most significant bit. After the arrival of this extension signal at the rightmost
cell of the counter the extension is physically performed by the first cell at the
right of the counting cells which now computes the least significant bit.
Obviously, it can be checked again by a leftmoving signal whether the counter
represents a power of 2 or not.
6.2 Proof of the Strictness of the Inclusion
Now we are prepared to prove the main result of this section.
Let a set of strings L be defined as follows:
The easy part is to show that L does not belong to L rt (OCA).
Lemma 17 L ∉ L_rt(OCA).
Proof. In [8] it has been shown that for a mapping f with the
property
lim
the set of strings {b^n a^{f(n)} | n ∈ N} does not belong to L_rt(OCA). Applying
the result to L we obtain
lim
and therefore L ∉ L_rt(OCA). 2
It remains to show that L is real-time recognizable by some mO-CA.
Theorem 18 L ∈ L_rt(1O-CA).
Proof. In the following a real-time 1O-CA M that recognizes L is constructed.
On input data b^n a^m we are concerned with three possible positions of the unique
defective cell:
1. The position is within the b-cells.
2. The position is within the leftmost 2^n a-cells.
3. The position is at the right-hand side of the 2^n-th a-cell.
At the beginning of the computation M starts the following tasks in parallel on
some tracks: The unique defective cell establishes a time constructor M 1 for
if it is an a-cell. The leftmost a-cell establishes another time constructor
and, additionally, a binary shift-right counter C 1 that counts the
number of a's. The rightmost b-cell starts a binary OCA-counter C 2 and, nally,
the rightmost a-cell sends a stop signal with speed 1 to the left.
According to the three positions the following three processes are superimposed.
Case 3. (cf. Figure 8) The shift-right counter is increased by 1 at every time
step until the stop signal arrives. Each overflow causes an incrementation (by
1) of the counter C_2. Let i be the time step at which the stop signal arrives
at the shift-right counter C_1 and let l be the number of digits of C_1. During
the next l time steps the signal travels through the counter and tests whether
its value is a power of 2. Subsequently, the signal tests during another n time
steps whether the value of the binary counter C_2 is exactly 2^n, from which
l = 2^n follows. If the tests are successful the input is accepted because the
input string is of the form b^n a^l a^{2^l} and, thus, belongs to L.
Figure 8: Example for case 3.
Case 2. (cf. Figure 9) In this case the space between the b's and the defective
cell is too small for setting up an appropriate counter as shown for Case 3.
Here a second binary counter C 3 within the b-cells is used. It is increased by
1 at every time step until it receives a signal from the defective cell and, thus,
contains the number of cells between the b-cells and the defective cell. Its value
x is conserved on an additional track. Moreover, at every time step at which
the time constructor M_1 marks the defective cell to be final, a signal is sent
to the b-cells that causes them to reset the counter to the value x by copying
the conserved value back to the counter track. After the reset the counter is
increased by 1 at every time step. Each reset signal also marks an unmarked
b-cell.
Figure 9: Example for case 2.

The input is accepted if exactly at the arrival of the stop signal at the leftmost
cell the counter overflows for the first time and all b-cells are marked: Let the
last marking of M_1 happen at time 2^{2^r} for some r ∈ N. The corresponding
left-moving signal arrives at time 2^{2^r} + x at the b-cells and resets the counter
C_3 to x. The stop signal arrives at time 2^{2^r} + x + s, s ∈ N, at the
counter, which now has the value x + s. Since the counter produces an overflow it
holds x + s = 2^n. Moreover, since M_1 has sent exactly r marking signals and
all b-cells are marked it follows r = n. Therefore, the stop signal arrives at the
rightmost b-cell at time 2^{2^n} + 2^n and the input belongs to L.
Case 1. Since the binary counter C_3 within the b-cells is an OCA-counter it
works fine even if the defective cell is located within the b-cells. Case 1 is a
straightforward adaption of Case 2 (here M_2 is used instead of M_1). 2
Corollary 19 L_rt(OCA) ⊂ L_rt(1O-CA).
This result can be generalized to devices with a bounded number of defective
cells as follows.
Theorem 20 Let m ∈ N be some constant; then L ∈ L_rt(mO-CA).
Proof. If all defective cells are located within the b-cells, or within the first 2^n
or within the last 2^{2^n} a-cells, then the proof follows from the three
cases in the proof of Theorem 18. Otherwise we are concerned with two more
cases:
Case 4: The leftmost defective a-cell is not within the first 2^n/m a-cells. By
standard compression techniques we can simulate m cells by one. Now the
construction of Case 3 in the proof of Theorem 18 solves this case.
Case 5: The leftmost defective a-cell is within the first 2^n/m a-cells. In order
to adapt Case 2 of the previous proof, due to Corollary 16 we may assume that
the time constructor for 2^{2^n} works in at most 2^n cells. By standard compression
techniques we obtain at most 2^n/m cells.
According to our assumption not all defective cells are within the first 2^n a-cells.
So we have two defective cells with a distance of more than 2^n/m
cells. This allows us to use the compressed time constructor in that area. The
remaining construction has been shown in Case 2.
At the very beginning of the computation it is not known which area of the
possible m areas is the relevant one. In order to determine it we start time
constructors at each defective cell. Now we are able to choose a correct track
of a successful computation and the set is recognized if one of the m time
constructors recognizes the input. 2
Corollary 21 Let m ∈ N be some constant; then L_rt(OCA) ⊂ L_rt(mO-CA).
7 CAs are Better than mO-CAs
In order to complete the comparisons we have to prove that the computational
power of real-time mO-CAs is strictly weaker than that of CAs. For this
purpose we can adapt a method developed in [16] for proving that certain
string sets do not belong to L_rt(OCA). The basic idea in [16] is to define an
equivalence relation on string sets and bound the number of distinguishable
equivalence classes of real-time OCA computations.
Definition 22 Let M = ⟨S, δ, #, A⟩ be an OCA and X, Y ⊆ A*. Two strings w, w′ are
defined to be (M, X, Y)-equivalent iff for all x ∈ X and y ∈ Y the leftmost
states of the real-time configurations on the inputs xwy and xw′y are equal
(cf. Figure 10).
Figure 10: Principle of bounding real-time equivalence classes.
The observation is that the essential point of the upper bound on equivalence
classes is due to the fact that the input sequences x and y are computationally
unrelated. Therefore, we can assume that the cell obtaining the first symbol of
w resp. of w′ as input is defective and so adapt the results in [16] to 1O-CAs
immediately:
Corollary 23 Let m ∈ N be some constant, then ...
Corollary 24 ...
Finally, it follows for a constant m ∈ N that L_rt(OCA) ⊂ L_rt(mO-CA) ⊂ L_rt(CA).
--R
Some relations between massively parallel arrays
Fault tolerant cellular automata
The synchronization of nonuniform networks of finite automata
Pushdown cellular automata
Minimal time synchronization in restricted defective cellular automata
Synchronization of a line of
Signals in one dimensional cellular automata
Fault tolerant cellular spaces
Language recognition and the synchronization of cellular automata
Language not recognizable in real time by one-way cellular automata
A fault-tolerant scheme for optimum-time firing squad synchronization
Deterministic one-way simulation of two-way real-time cellular automata and its related problems
Probabilistic logics and the synthesis of reliable organisms from unreliable components
--TR
On real-time cellular automata and trellis automata
Reliable computation with cellular automata
A simple three-dimensional real-time reliable cellular array
Minimal time synchronization in restricted defective cellular automata
The synchronization of nonuniform networks of finite automata
Language not recognizable in real time by one-way cellular automata
Some relations between massively parallel arrays
Pushdown cellular automata
Signals in one-dimensional cellular automata
Real-Time Language Recognition by One-Way and Two-Way Cellular Automata | static and dynamic defects;syntactical pattern recognition;fault-tolerance;link failures;cellular arrays |
637236 | A case study of OSPF behavior in a large enterprise network. | Open Shortest Path First (OSPF) is widely deployed in IP networks to manage intra-domain routing. OSPF is a link-state protocol, in which routers reliably flood "Link State Advertisements" (LSAs), enabling each to build a consistent, global view of the routing topology. Reliable performance hinges on routing stability, yet the behavior of large operational OSPF networks is not well understood. In this paper, we provide a case study on the eharacteristics and dynamics of LSA traffic for a large enterprise network. This network consists of several hundred routers, distributed in tens of OSPF areas, and connected by LANs and private lines. For this network, we focus on LSA traffic and analyze: (a) the class of LSAs triggered by OSPF's soft-state refresh, (b) the class of LSAs triggered by events that change the status of the network, and (c) a class of "duplicate" LSAs received due to redundancy in OSPF's reliable LSA flooding mechanism. We derive the baseline rate of refresh-triggered LSAs automatically from network configuration information. We also investigate finer time scale statistical properties of this traffic, including burstiness, periodicity, and synchronization. We discuss root causes of event-triggered and duplicate LSA traffic, as well as steps identified to reduce this traffic (e.g., localizing a failing router or changing the OSPF configuration). | INTRODUCTION
Operational network performance assurances hinge on
the stability and performance of the routing system. Understanding
behavior of routing protocols is crucial for better
operation and management of IP networks. In this pa-
per, we focus on Open Shortest Path First (OSPF) [1], a
widely deployed Interior Gateway Protocol (IGP) in IP
Aman Shaikh is at the University of California, Santa Cruz, CA
95064. E-mail: [email protected]
Chris Isett is with Siemens Medical Solutions, Malvern, PA 19355.
E-mail: [email protected]
Albert Greenberg, Matthew Roughan and Joel Gottlieb are with
falbert,roughan,[email protected]
networks today to control intradomain routing. Despite
wide-spread use, behavior of OSPF in large and commercial
IP networks is not well understood. In this paper, we
provide a case study of the dynamic behavior of OSPF in
a large enterprise IP network, using data gathered from the
deployment of a novel and passive OSPF monitoring sys-
tem. To our knowledge, this case study represents the first
detailed report on OSPF dynamics in any large operational
IP network.
OSPF is a link-state protocol, where each router generates
"Link State Advertisements" (LSAs) to create and
maintain a local, consistent view of the topology of the entire
routing domain. Tasks related to generating and processing
LSA traffic form a major chunk of OSPF process-
ing. In fact, OSPF LSA storms that cripple the network
are not unheard of [2]. Therefore, understanding the dynamics
of LSA traffic are vital to manage OSPF networks.
Such an understanding can also lead to realistic workload
models which can be used for a variety of purposes like
realistic simulations and scalability studies. Therefore, we
focus on the LSA traffic in this case study. Specifically,
we introduce a general methodology and associated predictive
model to investigate what the LSA traffic reveals
about network topology dynamics and failure modes.
The enterprise network under investigation provides
highly available and reliable connectivity from customer's
facilities to applications and databases residing in a data
center. Salient features of the network are:
• OSPF is used for routing in the data center. The OSPF
domain consists of about 15 areas and 500 routers. This
paper presents dynamics of OSPF for 8 areas (including
the backbone area) covering about 250 routers over a one
month period of April, 2002.
• The OSPF domain has a hierarchical structure with application
and database servers at the root and customers at
the leaves. The domain uses Ethernet LANs extensively
for connectivity. This is in contrast to ISP networks which
rely on point to point link technologies.
• Customers are connected over leased lines to the OSPF
network in the data center. EIGRP [1] is run over the
leased lines. Customer reachability information learnt via
EIGRP is subsequently imported into the OSPF domain.
This is in contrast to many ISP networks which propagate
external reachability information using an internal instance
of BGP (I-BGP [1]).
We believe the salient characteristics of the enterprise net-work
are common to a wide class of networks.
To understand the characteristics of the LSA traffic of the enterprise
network, we classify the traffic into three classes (a small classification sketch
follows the list):
• Refresh-LSAs - the class of LSAs triggered by OSPF's
soft-state refresh mechanism,
• Change-LSAs - the class of LSAs triggered by events
that change the status of the network, and
• Duplicate-LSAs - the class of extra copies of LSAs received
as a result of the redundancy in OSPF's reliable
LSA flooding mechanism.
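One plausible way to operationalize this three-way classification from a raw LSA stream is sketched below; the exact rules used in the paper may differ, and the field names and the toy stream are simplified placeholders.

```python
def classify(lsa, seen):
    """Classify an incoming LSA instance as 'duplicate', 'refresh' or 'change'.
    `lsa` carries an identifier, a sequence number and a body (content with
    age/sequence/checksum stripped); `seen` maps identifiers to the last
    accepted (sequence, body).  A first sighting is counted as a change."""
    key = (lsa["type"], lsa["lsid"], lsa["advrouter"])
    prev = seen.get(key)
    if prev is not None and prev[0] == lsa["seq"]:
        return "duplicate"                    # extra copy from reliable flooding
    seen[key] = (lsa["seq"], lsa["body"])
    if prev is None or prev[1] != lsa["body"]:
        return "change"                       # topology/reachability changed
    return "refresh"                          # same content, new instance

seen = {}
stream = [
    {"type": 1, "lsid": "10.0.0.1", "advrouter": "10.0.0.1", "seq": 7, "body": "links-A"},
    {"type": 1, "lsid": "10.0.0.1", "advrouter": "10.0.0.1", "seq": 7, "body": "links-A"},
    {"type": 1, "lsid": "10.0.0.1", "advrouter": "10.0.0.1", "seq": 8, "body": "links-A"},
    {"type": 1, "lsid": "10.0.0.1", "advrouter": "10.0.0.1", "seq": 9, "body": "links-B"},
]
print([classify(l, seen) for l in stream])  # ['change', 'duplicate', 'refresh', 'change']
```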
In Section V, we provide a simple formula to predict
the rate of refresh-LSA traffic, with parameters that can be
determined using information available in the router configuration
files. Our measurements confirm that the prediction
is accurate. To understand finer grained refresh traffic
characteristics, we propose and carry out simple time-series
analysis. In the case study, this analysis revealed
that the routers fall into two classes with different periodic
refresh behavior. As it turned out, the two classes
ran two versions of the router operating systems (Cisco
IOS). Our measurements showed that refresh traffic is not
synchronized across routers. In contrast, Basu and Riecke
[3] reported evidence of synchronization from their OSPF
model simulations. We believe that day to day variations
in the operational context tend to break synchronization
arising from initial conditions. We saw no evidence of
forcing functions (however weak) that push the network
towards synchronization.
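The kind of simple time-series check alluded to above can be sketched as follows; the binning, the synthetic demo data and the use of a periodogram peak are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dominant_period(arrival_times, bin_seconds=60):
    """Estimate the dominant period (in seconds) of a router's refresh-LSA
    arrivals by binning them into a counting series and locating the largest
    non-zero-frequency peak of its periodogram."""
    t = np.asarray(arrival_times, dtype=float)
    t -= t.min()
    counts, _ = np.histogram(t, bins=int(t.max() // bin_seconds) + 1)
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(len(counts), d=bin_seconds)
    peak = np.argmax(spectrum[1:]) + 1        # skip the DC component
    return 1.0 / freqs[peak]

# hypothetical arrivals roughly every 30 minutes over one day, with jitter
demo = np.arange(0, 24 * 3600, 1800) + np.random.uniform(-300, 300, 48)
print(dominant_period(demo))    # roughly 1800 seconds
```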
Having baselined the refresh-LSA traffic, we move on
to analysis of traffic triggered by topology changes in
Section VI. We isolate change-LSAs and attribute them
to either internal or external topology changes. Internal
changes are changes to the topology of the OSPF domain,
whereas external changes are changes in the reachability
information imported from EIGRP. We found that the
bulk of change-LSAs were due to external changes. In
addition, the overwhelming majority of change-LSA traffic
came from persistent yet partial failure modes. Internal
change-LSAs arose from failure modes within a single
router. The bulk of external change-LSAs arose from a single
EIGRP session which was flapping due to congestion on
the link.
Interestingly, in one critical internal router failure case,
an impending failure eluded the SNMP based fault and
performance management system, but showed up prominently
in spikes in change-LSA traffic. As a result of
these LSA measurements, proactive maintenance was carried
out, moving the network away from an operating point
where an additional router failure would have had catas-
trophic, network-wide impact.
Because OSPF uses reliable flooding to disseminate
LSAs, a certain level of duplicate-LSA traffic is to be ex-
pected. However, in the case study we observed certain
asymmetries in duplicate-LSA traffic that were initially
surprising, given the complete symmetry of the physical
network design (Section VII). However, a closer look
revealed asymmetries in the logical OSPF control plane
topology. This analysis then led to a method for reducing
duplicate-LSA traffic by altering the routers' logical OSPF
configurations, without changing physical structure of the
network.
A. Related Work
For the most part, previous studies of OSPF have been
model or simulation-based [3] [4], or have concentrated
on measuring OSPF implementation behavior on a single
router or in a small testbed [5]. The only exception is a
paper by Labovitz et al. [6] in which the authors analyzed
OSPF instability for a regional ISP network. However, our
work is a first comprehensive analysis of OSPF LSA traffic
and can lead to development of realistic network-wide
modeling parameters and simulation scenarios of greatest
interest. Very interesting work related to IS-IS [1] convergence
in ISP networks (and the potential for much faster
convergence) has appeared in talks and Internet drafts from
Packet Design [7] [8]. In the realm of interdomain routing,
numerous studies have been published about the behavior
of BGP in the Internet; some examples of which are [6]
[9] [10]. These studies have yielded many interesting and
important insights. IGPs, such as OSPF, need similar at-
tention, and we believe that this paper is a first step in that
direction.
II. OSPF FUNDAMENTALS AND LSAS
OSPF is a link state routing protocol, meaning that each
router within the domain discovers and builds an entire
view of the network topology. This topology view is conceptually
a directed graph. Each router represents a node
in this topology graph, and each link between neighboring
routers represents a unidirectional edge. Each link also
has an associated weight that is administratively assigned
in the configuration file of the router. Using the weighted
topology graph, each router computes a shortest path tree
with itself as the root, and applies the results to build its
forwarding table. This assures that packets are forwarded
along the shortest paths in terms of link weights to their
destinations [11]. We will refer to the computation of the
shortest path tree as an SPF computation, and the resultant
tree as an SPF tree.
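As a concrete illustration of the SPF computation just described, here is a compact Dijkstra sketch over a weighted directed graph. The graph encoding and the link weights are assumptions made only for this example (the router names echo Figure 1, but the weights are invented).

```python
import heapq

def spf(graph, root):
    """Dijkstra's algorithm on {node: [(neighbor, weight), ...]}.
    Returns the shortest-path distance and the SPF-tree parent of every node."""
    dist, parent = {root: 0}, {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

# toy topology with hypothetical weights
g = {"G": [("E", 1), ("F", 2)], "E": [("G", 1), ("C", 3)],
     "F": [("G", 2), ("C", 1)], "C": [("E", 3), ("F", 1)]}
print(spf(g, "G"))
```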
Fig. 1. From left to right, the figure depicts an example OSPF topology, the view of that topology from router G, and the shortest path tree calculated at G. (Though we show the OSPF topology as an undirected graph here for simplicity, in reality the graph is directed.)

For scalability, an OSPF domain may be divided into
areas determining a two-level hierarchy as shown in Figure
1. Area 0, known as the backbone area, resides at the
top level of the hierarchy and provides connectivity to the
non-backbone areas (numbered 1, 2, ...). OSPF assigns
each link to exactly one area. The routers that have links
to multiple areas are called border routers. For example,
routers C , D and G are border routers in Figure 1. Every
router maintains a separate copy of the topology graph
for each area it is connected to. The router performs the
SPF computation on each such topology graph and thereby
learns how to reach nodes in all the areas it is connected
to. In general, a router does not learn the entire topology
of remote areas (i.e., the areas in which the router does not
have links), but instead learns the weight of the shortest
paths from one or more border routers to each node in remote
areas. Thus, after computing the SPF tree for each
area, the router learns which border router to use as an
intermediate node for reaching each remote node. In ad-
dition, the reachability of external IP prefixes (associated
with nodes outside the OSPF domain) can be injected into
OSPF (X and Y in Figure 1). Roughly, reachability to an
external prefix is determined as if the prefix were a node
linked to the router that injects the prefix into OSPF.
A. Link State Advertisements (LSAs)
Routers running OSPF describe their local connectivity
in Link State Advertisements (LSAs). These LSAs are
flooded reliably to other routers in the network, which the
routers use to build the consistent view of the topology described
earlier. Flooding is made reliable by mandating
that a router acknowledge the receipt of every LSA it receives
from every neighbor. The flooding is hop-by-hop
and hence does not itself depend on routing. The set of
LSAs in a router's memory is called the link state database
and conceptually forms the topology graph for the router.
It is worth noting that the term LSA is commonly used
to describe both OSPF messages and entries in the link
state database. An LSA has essentially two parts: (a) an
identifier - three parameters that uniquely define a topological
element (e.g. a link or a network), and (b) the rest
of the contents, describing the status of this topological element.
OSPF uses several types of LSAs for describing different
parts of topology. Every router describes links to all
neighboring routers in a given area in a Router LSA. Router
LSAs are flooded only within an area and thus are said to
have an area-level flooding scope. Thus, a border router
has to originate a separate router LSA for every area it is
connected to. For example, router G in Figure 1 describes
its links to E and F in its area 0 router LSA, and its links
to H and I in area 2 router LSA. OSPF uses a Network
LSA for describing routers attached to a broadcast network
(e.g., Ethernet LANs). These LSAs also have an area-level
flooding scope. Section II-B describes OSPF operation in
broadcast networks in more detail. Border routers summarize
information about one area into another by originating
Summary
LSAs. It is through summary LSAs that other
routers learn about nodes in the remote areas. For example,
router G in Figure 1 learns about A and B through summary
LSAs originated by C and D. Summary LSAs have
area-level flooding scope. As mentioned earlier, OSPF allows
routing information to be imported from other routing
protocols, e.g., RIP, EIGRP or BGP. The router that imports
routing information from other protocols into OSPF
is called an AS Border Router (ASBR). An ASBR originates
external LSAs to describe external routing information.
In Figure 1 all the routers learn about X and Y
through external LSAs originated by ASBR A. External
LSAs are flooded in the entire domain irrespective of area
boundaries, and hence have domain-level flooding scope.
Table I summarizes this taxonomy of OSPF's LSAs.

TABLE I: LSA TAXONOMY
LSA Type   Information                                                  Flooding Scope
Router     The router's OSPF links belonging to the area                Area
Network    The routers attached to the broadcast network                Area
Summary    The nodes in remote areas reachable from the border router   Area
External   The external prefixes reachable from the ASBR                Domain
A change in the network topology requires affected
routers to originate and flood appropriate LSAs. For in-
stance, when a link between two routers comes up, the two
ends have to originate and flood their router LSAs with
the new link included in it. Moreover, OSPF employs periodic
refresh of LSAs. So, even in the absence of any
topological changes every router has to periodically flood
self-originated LSAs. The default value of the refresh-
period is 30 minutes. The refresh mechanism is jittered
and driven by timer expiration. Due to reliable flooding
of LSAs, a router can receive multiple copies of a change
or refresh triggered LSA. We term the first copy received
at a router as new and copies subsequently received as du-
plicates. Note that LSA types introduced in Table I are
orthogonal to refresh or change triggered LSA, and new
versus duplicate instances of an LSA.
B. OSPF Operation over a Broadcast Network
As noted in the introduction, the enterprise network
makes extensive use of Ethernet LANs which provide
broadcast capability. OSPF represents such broadcast networks
via a hub-and-spoke topology. One router is elected
as the Designated Router (DR). The DR originates a net-work
LSA representing the hub, describing links (repre-
senting the spokes) to the other routers attached to the
broadcast network. To provide additional resilience, the
routers also elect a Backup Designated Router (BDR),
which becomes the new DR if the DR fails. OSPF flooding
over a broadcast network is a two step process:
1. A router attached to the network sends an LSA only to
the DR by sending it to a special multicast group DR-Rtrs.
Only the DR and the BDR listen to this group.
2. The DR in turn floods the LSA back to other routers
on the network by sending it to another special multicast
group All-Rtrs. All the routers on the network listen to
this group.
The BDR participates in the DR-Rtrs group so that it can
remain in sync with DR. However, the BDR does not flood
an LSA to All-Rtrs unless the DR fails to do so.
III. ENTERPRISE NETWORK AND ITS
INSTRUMENTATION
In this Section, we first describe the OSPF topology
of the enterprise network used for our case study. We
then describe the OSPF monitoring system we deployed in
that network, for collecting LSAs and providing real-time
monitoring of the OSPF network.
A. Enterprise Network Topology
The enterprise network provides highly available and reliable
("always on") connectivity from customer's facilities
to applications and databases residing in a data center
(see
Figure
2). The network has been designed to provide
a high degree of reliability and fault-tolerance. Customer-
premise routers are connected to the data center routers
via leased lines. An instance of EIGRP runs between the
endpoints of each leased line. The routers in the data center
form an OSPF domain which is the focus of this pa-
per. Customer reachability information learnt via EIGRP
is imported as external LSAs into the OSPF domain. The
domain consists of Cisco routers and switches. For scal-
ability, the OSPF domain is divided into about 15 areas
forming a hub-and-spoke topology. Servers hosting applications
and databases are connected to area 0 (the back-bone
area) whereas customers are connected to routers in
non-backbone areas.
Certain details of the topology of non-backbone areas
are relevant to our analysis. Figure 3 shows the topology of
a non-backbone area. Two routers - termed B1 and B2
are connected to all areas (the backbone area and every
non-backbone area), and serve as OSPF border routers.
Each non-backbone area has up to 50 routers. As shown
in the figure, each area consists of two Ethernet LANs.
All the routers of the area are connected to these LANs.
Routers B1 and B2 have connections to both LANs and
provide the interconnection between the two LANs. Other
routers of the area are connected to exactly one of the two
LANs.
Fig. 2. Enterprise network topology.
Fig. 3. Structure of a non-backbone OSPF area. All the areas are connected via two border routers B1 and B2.
Since customer-premise routers (e.g., R′ in Figure 3) are
not part of the OSPF domain, and all data center routers
are not part of EIGRP domain, routes from one protocol
are injected into the other for ensuring connectivity. Thus,
router R of area A in Figure 3 which is connected to a customer
router R′, injects EIGRP routes into OSPF as external
LSAs. Route injection into OSPF is carefully controlled
through configuration.
B. OSPF Monitoring
The architecture of the OSPF monitor consists of two
basic components: LSARs (LSA Reflectors) and LSAGs
(LSA aGgregators) [4]. By design, LSARs are extremely
simple devices that connect directly to the network and
capture OSPF LSAs, and "reflect" them to LSAG for further
processing. In the case study here, the LSARs connect
to LANs and join the appropriate multicast groups
to receive LSAs. At least one LSAR was connected to
each area under study. In a point to point deployment,
the LSARs form "partial adjacencies". These adjacencies
fall short of full OSPF adjacencies but are sufficient to receive
OSPF traffic. LSARs speak enough OSPF to capture
OSPF LSA traffic. However, the design rules out the possibility
of the LSAR itself getting advertised for potential
use for routing regular traffic.
All code complexity is concentrated in the LSAGs. In
the case study, we deployed a single LSAG in the network.
The LSARs reliably feed the LSAs to the LSAG, which
aggregates and analyzes the LSA stream to provide real-time
monitoring and fault management capability.
For lack of space in this paper, we do not go into further
details of the monitoring system architecture.
We deployed three LSARs and one LSAG, running on
four Linux servers. Each LSAR has a number of interfaces
connected to different areas. LSARs currently monitor
area zero and seven non-backbone areas, covering a
total of about 250 routers. The LSARs are connected to
LANs and configured to monitor LSAs sent to the multicast
group All-Rtrs. One advantage of this approach is
that LSAR does not have to establish adjacencies with any
routers, and remains completely passive and invisible to
the OSPF domain. Since LSAR listens to group All-Rtrs,
LSA traffic seen by it is essentially identical to that seen
by a regular (i.e., non-DR, non-BDR) router on the LAN.
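A minimal sketch of how an LSAR-like collector could passively listen for OSPF packets on a LAN is given below. It assumes a Linux host with raw-socket privileges, the interface address is a placeholder, and we take "All-Rtrs" to refer to the AllSPFRouters multicast group 224.0.0.5 (OSPF runs directly over IP protocol 89); it is an illustration, not the deployed LSAR code.

# Sketch: passively capture OSPF packets on a LAN, in the spirit of an LSAR.
# Assumes a Linux host with CAP_NET_RAW; the interface IP below is a placeholder.
import socket
import struct

IFACE_IP   = "192.0.2.10"      # hypothetical address of the monitoring interface
ALL_SPF    = "224.0.0.5"       # AllSPFRouters multicast group (RFC 2328)
OSPF_PROTO = 89                # OSPF runs directly over IP, protocol number 89

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, OSPF_PROTO)
# Join the multicast group on the monitoring interface so the kernel
# delivers OSPF packets sent to AllSPFRouters.
mreq = struct.pack("4s4s", socket.inet_aton(ALL_SPF), socket.inet_aton(IFACE_IP))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    pkt, src = sock.recvfrom(65535)
    ihl = (pkt[0] & 0x0F) * 4          # IP header length in bytes
    ospf = pkt[ihl:]                   # OSPF packet starts after the IP header
    ospf_type = ospf[1]                # 1=Hello, 2=DB Descr, 3=LS Req, 4=LS Update, 5=LS Ack
    if ospf_type == 4:                 # LS Update packets carry the LSAs
        print("LS Update from", src[0], "length", len(ospf))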
IV. RESULTS
We carried out the following steps to analyze the LSA
traffic:
• Baseline. We analyze the refresh-LSA traffic to baseline
the protocol dynamics arising from soft state refresh.
Specifically, we predict the rate of refresh-LSA traffic from
information obtained from the router configuration files,
and then carry out a time-series analysis of finer time scale
characteristics.
• Analyze and fix anomalies. We take a closer look at the
change-LSA traffic, and identify root causes. In the operational
setting, the heavy-hitter root causes correspond to
failure modes. Identifying these failure modes at incipient
stages enables proactive maintenance.
• Analyze and fix protocol overheads. We take a closer
look at duplicate-LSA traffic, identify root causes, and
identify configuration changes for reducing the traffic (a sketch
of one possible refresh/change/duplicate classification rule is given after this list).
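The paper does not spell out the exact rule the LSAG uses to classify an incoming LSA instance as a refresh, a change or a duplicate; the sketch below is one plausible rule, keyed on the LSA identifier (type, LS ID, advertising router), the LS sequence number and the LSA body. All values in the example are invented.

# Sketch of a plausible refresh/change/duplicate classifier for incoming LSAs.
# The exact rule used by the LSAG is not given in the paper; this version keys
# on the LSA identifier, the LS sequence number and a hash of the LSA body.
import hashlib

class LsaClassifier:
    def __init__(self):
        # identifier -> (last sequence number, hash of last body)
        self.db = {}

    def classify(self, lsa_id, seq_num, body):
        digest = hashlib.sha1(body).hexdigest()
        prev = self.db.get(lsa_id)
        if prev is not None:
            prev_seq, prev_digest = prev
            if seq_num <= prev_seq:
                return "duplicate"            # same (or older) instance flooded again
            kind = "refresh" if digest == prev_digest else "change"
        else:
            kind = "change"                   # first sighting: treat as new information
        self.db[lsa_id] = (seq_num, digest)
        return kind

# Example: successive instances of the same (hypothetical) router-LSA.
c = LsaClassifier()
lsa = ("router", "10.0.0.1", "10.0.0.1")     # (type, LS ID, advertising router)
print(c.classify(lsa, 0x80000001, b"links-v1"))   # change (first sighting)
print(c.classify(lsa, 0x80000001, b"links-v1"))   # duplicate
print(c.classify(lsa, 0x80000002, b"links-v1"))   # refresh
print(c.classify(lsa, 0x80000003, b"links-v2"))   # change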
To get a general sense for the nature of observed LSA
traffic, consider Figure 4. The Figure shows the number
of refresh, change and duplicate LSAs received per day, in
April, 2002, for four OSPF areas. The other OSPF areas
monitored exhibited similar patterns of behavior.
First, note that refresh-LSA traffic is roughly constant
throughout the month for all areas. (The small dip in
the refresh traffic on April 7 is a statistical artifact due to
rolling the clocks forward by one hour during the switch
to daylight savings time.)
Fig. 4. Number of refresh, change and duplicate LSAs received at LSAR during each day in April, 2002: (a) Area 0, (b) Area 2, (c) Area 3, (d) Area 4.
Second, all four areas show
differences in change and duplicate-LSA traffic. In the
backbone area (area 0), refresh-LSA traffic is about two
orders of magnitude greater than change and duplicate-
LSA traffic. Non-backbone areas have very similar physical
topologies, but show markedly different change and
duplicate-LSA traffic. In area 2, change-LSA traffic is
significant, though duplicate-LSA traffic is negligible. In
area 3, we note significant duplicate-LSA traffic, and negligible
change-LSA traffic. Finally, area 4 saw negligible
traffic for both change and duplicate LSAs. The reasons
for these variations in LSA traffic patterns will become apparent
in sections VI and VII.
V. REFRESH-LSA TRAFFIC
A. Predicting Refresh-LSA Traffic
First, let us consider how to determine the average rate
NR of refresh-LSAs received at a given router R. For the
purposes of the calculation, we assume that the set LR of
unique LSA-identifiers in router R's link-state database
is constant. That is, network elements are not being introduced
or withdrawn. We will use the term LSA interchangeably
with LSA-identifier.
Let F_l denote the average rate of refreshes for a given
LSA l in the link-state database. Then,

  N_R = Σ_{l ∈ L_R} F_l    (1)
Let D denote the set of LSAs originated by all routers in
the OSPF domain, and S l the set of routers that receive a
given LSA l. Then, the set L_R can be expressed as

  L_R = { l ∈ D : R ∈ S_l }    (2)

which together with Eq. 1 determines N_R. Thus, we see
that estimating the refresh-LSA traffic at a router requires
determining three parameters:
• D, the set of LSAs originated by all the routers in the
OSPF domain.
• For each LSA l in D, S_l, the set of routers that can receive
l.
• For each LSA l in D, the associated refresh-rate F_l of l.
We next describe how to estimate these three parameters
from the configuration files of routers.
Fig. 5. Expected refresh-LSA traffic versus actual refresh-LSA traffic for two OSPF areas: (a) Area 2, (b) Area 3.
A.1 Parameter Determination
To determine D, it is possible to use information available
in router configuration files. In particular, it is not
hard to count the exact number of internal LSAs using configuration
files. For example, a router configuration file
specifies the OSPF area associated with each interface of
the router. We can derive the number of router LSAs a
given router originates by counting the number of unique
areas associated with the router's interfaces. On the other
hand, it is impossible to estimate the exact number of external
LSAs from configuration files. In general, the number
of external LSAs depends on which prefixes are dynamically
injected into the OSPF domain. However, one can use
heuristics to determine external LSAs using the filtering
clauses in configuration files that control external route injection.
Calculating the parameter S l for LSA l is equivalent to
counting the routers in the flooding scope of l. The count
can be easily determined by constructing the OSPF topology
and area structure from the configuration files.
To estimate the refresh-rate F l of LSA l, a crude option
is to use the recommended value of 30 minutes from the
OSPF specification [12]. In practice, better estimates can
be obtained by combining configuration information with
published information on the router vendor's refresh algorithm.
We determined all three parameters from the network's
router configuration files using an automated router configuration
analysis tool, NetDB [13]. Specifically, we computed
the set D for router, network, summary and external
LSAs. We estimated external LSAs using the heuristic that
every external prefix explicitly permitted via configuration
is in fact injected as an external LSA. As it turned out, this
heuristic underestimated the number of external LSAs by
about 10%, owing to the injection of more-specific prefixes
than those present in filters within the configuration files.
For refresh-rates, the tool first determined the operating system
version of each router from the configuration files. It
then consulted a table of refresh rates using the operating
system version as the index. The table itself was populated
from information published on the vendor web-site
[14] [15].
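As a concrete illustration, the calculation of N_R from the three configuration-derived parameters can be written in a few lines of code. The sketch below assumes the parameters have already been extracted (for instance by a tool such as NetDB) into plain Python dictionaries; all values in the worked example are illustrative, not taken from the studied network.

# Sketch: predict the refresh-LSA rate N_R seen by router R (Eq. 1 and 2),
# given configuration-derived data. All inputs here are illustrative.
#
# D: every LSA identifier in the domain, mapped to its originating router.
# scope: for each LSA l, the set S_l of routers inside its flooding scope.
# refresh_interval: per-router refresh period in seconds, looked up from the
#                   IOS version (e.g., 1800 s for IOS 11, ~2015 s for IOS 12).

def predicted_refresh_rate(router, D, scope, refresh_interval):
    """Return the expected number of refresh-LSAs per hour arriving at `router`."""
    rate = 0.0
    for lsa, origin in D.items():
        if router in scope[lsa]:                        # l is in L_R iff R is in S_l
            rate += 3600.0 / refresh_interval[origin]   # F_l in LSAs per hour
    return rate

# Tiny worked example with two LSAs and two routers.
D      = {"lsa-a": "r1", "lsa-b": "r2"}
scope  = {"lsa-a": {"r1", "r2"}, "lsa-b": {"r2"}}
refresh_interval = {"r1": 1800.0, "r2": 2015.0}
print(predicted_refresh_rate("r2", D, scope, refresh_interval))  # ~ 2.0 + 1.79 per hour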
Figure
5 shows the expected refresh-LSA traffic per day
versus the actual number of LSAs received by LSAR, for
two areas. Clearly, the actual refresh-LSA traffic is as predicted.
B. Time-series Analysis
In this section, we report on a time-series analysis of
refresh-LSA traffic. The analysis revealed that the traffic
is periodic, as expected. Recently, a paper by Basu
and Riecke [3] suggested that LSA refreshes from different
routers could become synchronized. We tested the hypothesis
and found refresh traffic not to be synchronized
across different routers.
B.1 Periodicity of Refresh-LSA Traffic
The time-series analysis revealed that the routers fall
into two classes:
• The first class has a refresh-period of 30 minutes and
exhibits very strong periodic behavior.
• The second class has a refresh-period of about 33 minutes,
with a jittered refresh pattern.
As it turned out, the analysis picked up differences in
the refresh algorithms, associated with different releases
of the router operating system. Specifically, the first class
of routers ran IOS 11 (11.1 and 11.2) whereas the second
class of routers ran IOS 12 (12.0, 12.1 and 12.2). The
OSPF implementation in IOS 11 follows a simple refresh
Fig. 6. Refresh-LSA traffic for routers running IOS 11. The upper
graph shows the time-series for a few hours of a typical
day. The lower graph shows the power-spectrum analysis of
the time-series.
strategy. The router scans the OSPF database every 30
minutes and refreshes all its LSAs by reflooding them in
the network [15]. Figure 6 shows an example from the
time-series obtained by binning the LSAs into sampling
intervals of size 1 minute (the horizontal line in the upper
graph shows the average LSA rate based on
bins). It seems obvious from the graph that there is a periodicity
in the time-series. To test this, and determine the
period we plot the power spectrum of the time-series in the
lower graph of the figure (based on a longer 1 week sample
of data). The power spectrum shows a distinct peak at
a frequency of 2 cycles per hour (a period of 30 minutes).
The subsequent peaks are the harmonics of this distinct
peak, and so we can conclude that the time-series shows
strong periodicity.
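The periodicity test itself is straightforward to reproduce: bin the LSA arrival times, compute the power spectrum of the binned time-series and look for the dominant frequency. A minimal numpy sketch is shown below; arrival times are assumed to be in seconds, the 1-minute bin size matches that used for Figure 6, and the synthetic check at the end is an invented workload rather than measured data.

# Sketch: estimate the refresh period from LSA arrival timestamps (seconds)
# by binning into a time-series and locating the peak of its power spectrum.
import numpy as np

def dominant_period(arrival_times, bin_size=60.0):
    t = np.asarray(arrival_times, dtype=float)
    t = t - t.min()
    nbins = int(np.ceil(t.max() / bin_size)) + 1
    counts, _ = np.histogram(t, bins=nbins, range=(0.0, nbins * bin_size))
    counts = counts - counts.mean()              # remove the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(nbins, d=bin_size)   # in cycles per second
    peak = freqs[1:][np.argmax(power[1:])]       # skip the zero-frequency bin
    return 1.0 / peak                            # dominant period in seconds

# Synthetic check: one router refreshes a burst of 10 LSAs roughly every
# 1800 s, with up to +/- 300 s of jitter on each refresh instant.
rng = np.random.default_rng(0)
bursts = 1800.0 * np.arange(1, 49) + rng.uniform(-300, 300, 48)
times = np.repeat(bursts, 10)
print(dominant_period(times))                    # close to 1800 seconds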
The refresh algorithm underwent a change when IOS
11.3 was introduced [15]. The router running IOS 12 has a
timer which expires every refresh-int seconds. Upon expiry
of the timer, the router refreshes only those LSAs
whose last refresh-time is more than 30 minutes in the past. The parameter
refresh-int is configurable with a default of 4 min-
utes. Furthermore, the timer is jittered. Since the routers
of the enterprise network use the default, we expect the
refresh interval to be about 32 minutes (the smallest multiple
of 4 which is greater than 30 minutes). The effect
can be seen in the upper graph of Figure 7 which shows
the LSA refresh pattern for routers running IOS 12. The
power spectrum in the lower graph of the figure shows that
the data has a strong component at 1.79 cycles per hour,
which is roughly 33 minutes as expected. (We have correspondingly
chosen the bin size for this data to be 67.189
seconds to minimize aliasing in the results.) Notice that
Fig. 7. Refresh-LSA traffic for routers running IOS 12. The upper
graph shows the time-series for a few hours of a typical
day. The lower graph shows the power-spectrum analysis of
the time-series.
there is considerably more noise in the spectrum due to
the jitter algorithm.
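The two refresh behaviours described above are easy to mimic in a few lines of simulation code. The sketch below follows the published descriptions (IOS 11: scan the database every 30 minutes and reflood everything; IOS 12: a timer of roughly refresh-int = 4 minutes, refreshing only LSAs older than 30 minutes); the +/-10% jitter model is our own assumption, since the exact IOS jitter distribution is not published.

# Sketch: simulate the two refresh strategies to see the 30 vs ~32 minute periods.
# The exact jitter distribution used by IOS is not published; +/-10% is assumed here.
import random

def ios11_refresh_times(horizon, scan=1800.0):
    """Every LSA is reflooded at each 30-minute database scan."""
    t, out = 0.0, []
    while t < horizon:
        t += scan
        out.append(t)
    return out

def ios12_refresh_times(horizon, refresh_int=240.0, max_age_before_refresh=1800.0):
    """A jittered 4-minute timer; an LSA is refreshed only once it is >30 min old."""
    t, last_refresh, out = 0.0, 0.0, []
    while t < horizon:
        t += refresh_int * random.uniform(0.9, 1.1)   # assumed jitter model
        if t - last_refresh > max_age_before_refresh:
            out.append(t)
            last_refresh = t
    return out

random.seed(1)
r11 = ios11_refresh_times(48 * 3600.0)
r12 = ios12_refresh_times(48 * 3600.0)
print(sum(b - a for a, b in zip(r11, r11[1:])) / (len(r11) - 1))  # 1800 s (30 minutes)
print(sum(b - a for a, b in zip(r12, r12[1:])) / (len(r12) - 1))  # ~1920 s (about 32 minutes)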
B.2 Synchronization of Refresh-LSA Traffic
It has been suggested that LSA refreshes are likely to be
synchronized with the undesirable consequence that they
are all sent nearly simultaneously creating a burst of LSA
traffic at a router [3]. We analyze refresh-LSA traffic to see
how bursty the traffic appears. In general, the burstiness of
LSA traffic received by a router depends on two things:
ffl The burstiness of refresh-LSA traffic originated by a single
router.
ffl Synchronization between refresh-LSAs originated by
different routers.
We have observed that LSAs originated by a single
router are usually clumped together during refresh. With
IOS 11 this is expected since a router refreshes all LSAs
on expiry of a single timer. Even with IOS 12, we have observed
that LSAs originated by a single router are clumped
together. Specifically, summary and external LSAs originated
by some routers tend to be refreshed in big bursts.
This explains the periodic spikes seen in Figures 6 and 7.
Next, we consider how refresh-LSAs coming from different
routers interact. A recent paper [3] suggested that
LSA refreshes from different routers are likely to be syn-
chronized. The mechanism that creates this synchronization
is related to the startup of the routers. However, in
general, in network related phenomena, synchronization is
only a real problem when there are forces driving the system
toward synchronization, which is not the case here.
For example, see [16] [17] where synchronization occurs
as a result of the dynamics of the system pushing it towards
Fig. 8. Number of routers whose LSAs were received within one second intervals at LSAR during one refresh cycle: (a) IOS 11 (period = 1800 seconds), (b) IOS 12 (period = 2015 seconds). The routers belong to area 8.
synchronization in a similar manner to the Huygens' clock
synchronization problem [18].
To understand why synchronization is only a real problem
if the system is pushed towards it, consider that in
a real network it would be very rare for all the routers
in a network to be rebooted simultaneously. Over time
though, individual links and routers are added, dropped,
and restarted. Each time the topology is changed in this
way, a little part of the synchronization is broken. The
larger the network, the more often topology changes will
occur, and so the synchronization is broken more quickly
in the cases where it might cause problems. Furthermore,
there is always a small drift in any periodic signal, and
this drift breaks the synchronization over time. Moreover,
there is no "weak coupling" [16] in OSPF LSA refresh
process, i.e., the LSA generation at a router is not driven
by that at other routers. Finally, the addition of jitter in
IOS 12 onwards quickly removes any synchronization between
these routers. If there is no force driving the system
towards synchronization, then it is unlikely to be seen outside
of simulations.
For the enterprise network, we have observed that refresh
traffic from different routers is not strongly synchronized.
Figure 8(a) and (b) show the number of routers
(from area 8) whose LSAs were received at LSAR during
a one second interval for the duration of a typical refresh-
cycle. Neither graph displays evidence of strong synchronization
between routers. We have also performed statistical
tests which show that at least at time scales below a minute
the LSA traffic from different routers is not at all synchronized,
and appears to be uniformly distributed over the
30 minute refresh period. On larger time scales, there is some
apparent weak correlation (see the clustering of routers at
and 0.75 in
Figure
8(a)), but the degree of correlation
seen should not have practical importance even if it is not
a statistical anomaly.
Area 8 was chosen because it contained a good mix of
routers with IOS 11 and 12. Other areas show similar characteristics.
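One simple way to quantify the absence of synchronization, in the spirit of the tests mentioned above, is to reduce each router's refresh instants to a phase within the refresh period and test the collection of phases for uniformity. The chi-square variant sketched below is our own illustration, not necessarily the exact test used in the study; the example data is synthetic.

# Sketch: test whether per-router refresh phases look uniform over the refresh
# period (i.e., no synchronization). Illustrative only.
import numpy as np
from scipy.stats import chisquare

def phase_uniformity_pvalue(refresh_times_per_router, period=1800.0, nbins=12):
    """Chi-square test of whether per-router refresh phases are uniform over the period."""
    phases = np.array([times[0] % period for times in refresh_times_per_router])
    observed, _ = np.histogram(phases, bins=nbins, range=(0.0, period))
    return chisquare(observed).pvalue       # large p-value: no evidence of synchronization

rng = np.random.default_rng(2)              # 60 unsynchronized routers, random phases
routers = [rng.uniform(0, 1800) + 1800 * np.arange(10) for _ in range(60)]
print(phase_uniformity_pvalue(routers))     # typically well above 0.05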
VI. CHANGE-LSA TRAFFIC
Figure
4 shows that some areas receive significant
change-LSA traffic. In this section, we first classify
these LSAs by whether they indicate internal or external
changes. Then, we look at the underlying causes.
Internal changes are conveyed by router and network
LSAs within the area in which change occurs and by summary
LSAs outside the area. External changes are conveyed
by external LSAs. Figure 9 shows the number of
change LSAs for the month of April. The figure provides
curves for selected areas, accounting for more than 99% of
the corresponding LSA traffic in April.
Figure
9 shows that external changes constitute the
largest component of change-LSAs generated in the net-
work. External changes from area 2 dominate those seen
in other areas (Figure 9(c)). Among internal changes, most
occurred in area 0 (Figure 9(a)). Internal change-LSAs in
area 0 were not propagated to other areas, since the net-work
was configured to allow only summary LSAs representing
default route (0.0.0.0/0) into non-backbone areas.
The spike in Figure 9(b) is due to a border router withdrawing
and re-announcing summary LSAs.
A. Root Cause Analysis
We saw that area 0 accounted for most of the internal
changes seen in April. It turns out that almost all these
changes were due to an internal error in a crucial router in
area 0. This router was the DR on all the LANs of area 0.
Because of the error, there would be episodes lasting a
few minutes during which the problematic router would
drop and re-establish adjacencies with other routers on the
LAN. Accordingly, a flurry of change-LSAs were gener-
ated during each such episode. Each episode lasted only
for a few minutes and there were only a few episodes each
day.
Fig. 9. Change-LSA traffic for each day in April: (a) Router network LSAs, (b) Summary LSAs, (c) External LSAs. Each graph shows those areas which together account for more than 99% of the change-LSA traffic in April. For example, areas 0, 1, 2 and 6 account for 99% of external change-LSAs.
Fig. 10. Effect of a problematic router on the number of router and network LSAs in area 0: (a) entire April, (b) April 16.
The data suggests that during the episodes the network
was at risk of partitioning or was in fact partitioned.
In April, these episodes account for more than 99% of total
internal change-LSAs observed in area 0. Figure 10(a)
shows the number of router and network LSAs for each
day of April, and Figure 10(b) shows the same statistic for
each hour of one of these days when area 0 witnessed a
few episodes. On April 19th, acting on the data gathered
by the OSPF monitor, the operator changed the configuration
of the problematic router to prevent it from becoming
the DR, and rebooted it. As a result, the network stabi-
lized, and changes in the area 0 topology vanished. Inter-
estingly, this illustrates the potential of OSPF monitoring
for localizing failure modes, and proactively fixing the net-work
before more serious failures occur.
Figure
9(c) shows that among all areas, area 2 witnessed
the maximum number of external changes in April. A
large percentage of these changes were caused by a flapping
external link. One of the routers (call it A) in area 2
maintains a link to a customer premise router (call it B)
over which it runs EIGRP, as mentioned in Section III-A.
Router A imports 4 EIGRP routes into OSPF as 4 external
LSAs. Closer inspection of network conditions revealed
that the EIGRP session between A and B started flapping
when the link between A and B became overloaded. This
leads to router A repeatedly announcing and withdrawing
EIGRP prefixes via external LSAs. The flapping of the
link between A and B happened nearly every day in April
between 9 pm and 3 am. These link flaps accounted for
about 82% of the total external change-LSAs in the network and 99% of
the external change-LSAs witnessed by area 2. At
the time of writing of this paper, the network operator is
still looking into ways of minimizing the impact of these
external EIGRP flaps without impacting customer's connectivity
or performance.
VII. DUPLICATE-LSA TRAFFIC
In Section IV, we remarked that area 3 received significant
duplicate-LSA traffic (almost 33% of the total LSA
traffic in that area). On the other hand, area 2 saw negligible
duplicate-LSA traffic. Since processing duplicate-
LSAs wastes CPU resources, it is important to under-
fLSAs
Day
Number of LSAs per day: April, 2002
Total LSAs in area 2
Total change LSAs in area 2
Total change LSAs due to flappers20060010004 8 12
Number
of
LSAs
Hour
Number of LSAs per hour: April 11, 2002
Total LSAs in area 2
Total change LSAs in area 2
Total change LSAs due to flappers
(a) Entire April (b) April 11.
Fig. 11. Effect of a flapping external link on area 2 external LSAs.
stand the circumstances that lead to duplicate-LSA traffic
in some areas and not others. As we will see, a detailed
analysis of the OSPF control plane connectivity explains
the variation in duplicate LSA traffic seen in areas 2 and 3,
and leads to a configuration change that would reduce duplicate
LSA traffic in area 3. In general, we believe such
analysis can provide operational guidelines for lowering
the level of duplicate LSA traffic, at the cost of small trade-offs
in network responsiveness.
A. Causes of Duplicate-LSA Traffic
In the enterprise network under study, all areas have
identical physical connectivity. Thus, it initially came as a
surprise that one area saw significant duplicate-LSA traffic
and another area did not. As it turns out, though all areas
have identical physical structure, the difference in how
LSAs propagate through the areas gives rise to the differences
observed in duplicate-LSA traffic. Recall that the
areas are LAN-based, and that the DR and BDR behave
differently than other routers on the LAN, as described
in Section II-B: The DR and BDR send LSAs to all the
routers (and the monitoring system's LSAR) on the LAN,
whereas other routers send LSAs only to the DR and BDR.
Thus, the LSA propagation behavior on a LAN depends
strongly on which routers play the role of the DR or BDR,
and how these routers are connected to the rest of the network.
The analysis is rather intricate. Recall that every area
has two LANs, and that the LSAR is attached to one of
the LANs. Let us denote the LAN on which the LSAR
resides as LAN 1, and the other LAN as LAN 2. Recall
that B1 and B2 are connected to both LANs; other routers
are connected to only one of the LANs. We denote B1
and B2 as the B-pair, and rest of the routers as LAN1-
router or LAN2-router, based on which LAN the routers
reside on. Since the B-pair routers are connected to both
LANs, the role they play on LAN 1 (DR, BDR or regu-
lar) is very important in determining whether the LSAR
receives duplicate-LSAs or not. Indeed, it is the B-pair
routers' difference in role in areas 2 and 3 that gives rise to
different duplicate-LSA traffic in these two areas.
We arrive at four cases based on the roles B-pair routers
play on LAN 1:
Case 1: {DR, BDR}
Case 2: {DR, regular}
Case 3: {BDR, regular}
Case 4: {regular, regular}
To understand which of these cases leads to duplicate-LSA
traffic on LAN 1 of a given area, we model LSA propagation
on LAN 1 with a "control-plane" diagram in Figure
12. This diagram shows links between those routers
that can send LSAs to each other. In addition, the figure
shows how one or more copies of LSA L may propagate to
the LSAR via the B-pair routers. Suppose LSA L is originated
by a LAN2-router. The B-pair routers receive copies
of L on their LAN 2 interfaces and further propagate the
LSA to the LSAR on LAN 1. We denote the copies of L
propagated via B1 and B2 as L1 and L2 respectively in
Figure
12. The figure makes it clear that cases 1 and 3 lead
to duplicate-LSAs whereas cases 2 and 4 do not.
Table
II shows the cases we encounter in different areas.
Note that area 3 encounters case 3 whereas area 2 encounters
case 2. This explains why area 3 receives duplicate-
LSA traffic and area 2 does not.
Note that under cases 1 and 3, whether the LSAR actually
receives multiple copies of LSA L depends on the
LSA arrival times at various routers. For example, consider
case 1. Whether the B-pair routers send LSA L to
the LSAR or not depends on the order in which the LSA
arrives at these two routers. At least one of B-pair routers
Fig. 12. Control-plane diagram for LAN 1 under different roles played by the B-pair routers. The figure also shows how different
copies of LSA L can arrive at the LSAR via the B-pair routers. L1 and L2 are copies of LSA L.
TABLE II
DR AND BDR ON LAN 1 OF VARIOUS AREAS.
Area     DR on LAN 1    BDR on LAN 1    Case above
Area 4   B2             LAN 1 rtr       case 2
Area 5   B2             B1              case 1
Area 6   B2             B1              case 1
Area 8   LAN 1 rtr      B2              case 3
must send the LSA to the LSAR on LAN 1. However,
whether the other router also sends the LSA to LSAR depends
on the order of LSA arrival at this router. If the
router receives the LSA on LAN 2 first, it sends the LSA
to LSAR resulting in a duplicate being seen at LSAR. On
the other hand, if the router receives the LSA on LAN 1
first, it does not send the LSA to LSAR. In this case, the
LSAR does not receive a duplicate-LSA. A similar argument
can be made regarding case 3.
To summarize, an LSA originated by a LAN2-router
may get duplicated on LAN 1 under cases 1 and 3, if it
arrives in a particular order at different routers. Figure 13
shows the number of duplicate-LSAs originated by various
routers for two representative areas. All the duplicate-
LSAs seen by the LSAR are originated by LAN2-routers
and the B-pair routers. The LSAR does not see duplicate-
LSAs for LSAs originated by a LAN1-router. This is
because irrespective of which router is DR and BDR on
LAN 1, LSAR receives a single copy of an LSA originated
by a LAN1-router from the DR.
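The case analysis above can be captured in a few lines: given which routers are DR and BDR on LAN 1, decide which case applies and whether an LSA originated on LAN 2 can reach the LSAR twice. The sketch below encodes the four cases; the router names used in the examples are placeholders.

# Sketch: which DR/BDR assignments on LAN 1 make duplicate LSAs at the LSAR
# possible for LSAs originated on LAN 2 (cases 1-4 of the control-plane analysis).

B_PAIR = {"B1", "B2"}     # the two border routers, connected to both LANs

def lan1_case(dr, bdr):
    """Return the case number (1-4) given the DR and BDR of LAN 1."""
    roles = {r for r in (dr, bdr) if r in B_PAIR}
    if len(roles) == 2:
        return 1                               # {DR, BDR}: both are border routers
    if len(roles) == 1:
        return 2 if dr in B_PAIR else 3        # {DR, regular} vs {BDR, regular}
    return 4                                   # {regular, regular}

def duplicates_possible(dr, bdr):
    """Cases 1 and 3 can deliver two copies of a LAN2-originated LSA to the LSAR."""
    return lan1_case(dr, bdr) in (1, 3)

# Examples mirroring Table II (router names are placeholders).
print(lan1_case("B2", "B1"), duplicates_possible("B2", "B1"))              # case 1, True
print(lan1_case("B2", "lan1-rtr"), duplicates_possible("B2", "lan1-rtr"))  # case 2, False
print(lan1_case("lan1-rtr", "B2"), duplicates_possible("lan1-rtr", "B2"))  # case 3, True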
Figure
14 shows the fraction of total LSAs originated
by LAN2-routers that are duplicated. As the figure indi-
cates, even under cases 1 and 3, not all the "duplicate-
susceptible" LSAs are actually duplicated. We observed
that within a given area, the percentage of LSAs originated
by LAN2-routers that become duplicated remains roughly
constant for days. However, this percentage varies widely
across areas. Understanding this behavior requires understanding
the finer time scale behavior of the routers in-
volved, and is ongoing work.
Fig. 13. Duplicate-LSA traffic per day in April from various routers (LAN2-routers, B1, B2, LAN1-routers, and total) for two representative areas; (a) Area 3.
B. Avoiding Duplicate-LSAs
Having uncovered the causes of duplicate-LSAs, we explore
ways to reduce their volume. The enterprise network
operator can avoid duplicate-LSAs if he can force case 2
or 4, by controlling which router becomes the DR and/or
the BDR on LAN 1. This depends on a complex election
algorithm executed by all routers on the LAN. The input to
this algorithm is a priority parameter, configurable on each
interface of a router. The higher the priority, the greater
the chance of winning the election, though these priorities
provide only partial control. As a result, the operator cannot
force case 2 to apply. Even if the network operator
assigns highest priority to one of the B-pair and zero priority
to the other routers on LAN 1, there is no guarantee
that the high priority router will become DR. Fortunately,
the operator can force case 4 to apply by ensuring that neither
of the B-pair routers become DR or BDR on LAN 1.
This is accomplished by setting the priority of these two
routers to 0, so that they become ineligible to become DR
or BDR [12].
Whether forcing case 4 is sensible depends on at least
two factors. First, the DR and BDR play a very important
role on a LAN, and bear greater OSPF processing load
than the regular routers on the LAN. Therefore, the operator
has to ensure that the most suitable routers (taking
into account load and hardware capabilities) can become
DR and BDR. The second factor is more subtle. Typically,
reducing duplicate-LSAs requires reducing the number of
alternate paths that the LSAs take during reliable flood-
ing. This can increase the LSA propagation time, which in
turn can increase convergence time. With case
the LSAs originated on LAN 2 have to undergo an extra
hop before the other routers on LAN 1 receives them.
This means that the LSA propagation time may increase
Fraction
Day
Fraction of LAN 2 LSAs that get duplicated: April, 2002
area 3
area 6
area 8
Fig. 14. Fraction of LSAs originated by LAN2-routers that get
duplicated.
case 4 is forced.
VIII. CONCLUSIONS
In this paper, we provided a case study of OSPF behavior
in a large operational network. Specifically, we introduced
a methodology for OSPF traffic analysis, treating
LSA traffic generated by soft-state refresh, topology
change, and redundancies in reliable flooding, in turn.
We provided a general method to predict the rate of
refresh LSAs from router configuration information. We
found that measured refresh-LSA traffic rates matched predicted
rates. We also looked at finer time scale behavior of
refresh traffic. The refresh period of different routers was
in conformance with the expected behavior of their IOS
versions. Though LSAs originated by a single router tend
to come in bursts, we found no evidence of synchronization
across routers. This may reduce scalability concerns,
which would arise if refresh synchronization were present,
leading to spikes in CPU and bandwidth usage.
We found that LSAs indicating topology change were
mainly due to external changes. This is not unexpected
since the network imports customer reachability information
into the OSPF domain; this reachability information is prone to change as customers
are added, dropped or their connectivity is changed.
Moreover, since customers are connected over leased lines,
their reachability information is likely to be more volatile.
For both internal and external topology changes, persistent
but partial failure modes produced the vast majority
of change LSAs, associated with flapping links. Interest-
ingly, the internal change-LSA traffic pointed to an intermittently
failing router, leading to a preventative action to
protect the network. It is fair to say that any time a new
way to view networks is introduced (route monitoring in
this case), new phenomena are observed, leading to better
network visibility and control. Though further and wider
studies are needed, we suspect that persistent and partial
failure modes are typical, and the development of strategies
for stabilizing OSPF would benefit from focusing on
such modes. During the study, we did not observe any
instance of network-wide meltdown or network-wide in-
stability. We also investigated the nature of the duplicate-
LSA traffic seen in the network. The analysis led to a
simple configuration change that reduces duplicate traffic,
without impacting the physical structure of the network.
The findings of this case study are specific to the enterprise
network and the duration of the study. Similar studies
for other OSPF networks (enterprise and ISP) and studies
over longer durations are needed to further enhance understanding
of OSPF dynamics. This forms a part of our
future work. Furthermore, in the ISP setting, we intend
to join the OSPF and BGP monitoring data to analyze the
interactions of these protocols. Another direction of future
work is to develop realistic workload models for OSPF
emulation, test and simulations. Our methodology for predicting
refresh-LSA traffic is a first step in that direction.
The workload models can also be used in conjunction with
work on OSPF processing delays on a single router [5] to
investigate network scalability.
ACKNOWLEDGMENTS
We are grateful to Jennifer Rexford and Matt Grossglauser
for their comments on the paper. We thank Russ
Miller for his encouragement and guidance in the operational
deployment. Finally, we thank the anonymous reviewers
for their comments.
--R
Routing in the Internet
"Can One Rogue Switch Buckle AT&T's Net- work?,"
"Stability Issues in OSPF Rout- ing,"
"An OPSF Topology Server: Design and Evaluation,"
"Experience in Black-box OSPF Measurement,"
"Experimen- tal Study of Internet Stability and Wide-Area Network Failures,"
"Toward Milli-Second IGP Convergence,"
"ISIS Routing on the Qwest Backbone: a Recipe for Subsecond ISIS Convergence,"
"Internet Routing Stability,"
"Origins of Pathological Internet Routing Instability,"
"OSPF Version 2,"
"IP Network Configuration for Intra-domain Traffic Engineering,"
"Cisco Systems,"
"OSPF LSA Group Pacing,"
"The Synchronization of Periodic Routing Messages,"
"Oscillations and Chaos in a Flow Model of a Switching System,"
--TR
The synchronization of periodic routing messages
Internet routing instability
Routing in the Internet (2nd ed.)
Stability issues in OSPF routing
Experience in black-box OSPF measurement
OSPF
--CTR
Renata Teixeira , Aman Shaikh , Tim Griffin , Jennifer Rexford, Dynamics of hot-potato routing in IP networks, ACM SIGMETRICS Performance Evaluation Review, v.32 n.1, June 2004
Aman Shaikh , Albert Greenberg, OSPF monitoring: architecture, design and deployment experience, Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation, p.5-5, March 29-31, 2004, San Francisco, California
Aman Shaikh , Rohit Dube , Anujan Varma, Avoiding instability during graceful shutdown of multiple OSPF routers, IEEE/ACM Transactions on Networking (TON), v.14 n.3, p.532-542, June 2006
Yin Zhang , Matthew Roughan , Carsten Lund , David L. Donoho, Estimating point-to-point and point-to-multipoint traffic matrices: an information-theoretic approach, IEEE/ACM Transactions on Networking (TON), v.13 n.5, p.947-960, October 2005
Yin Zhang , Matthew Roughan , Carsten Lund , David Donoho, An information-theoretic approach to traffic matrix estimation, Proceedings of the conference on Applications, technologies, architectures, and protocols for computer communications, August 25-29, 2003, Karlsruhe, Germany
Ruoming Pang , Mark Allman , Mike Bennett , Jason Lee , Vern Paxson , Brian Tierney, A first look at modern enterprise traffic, Proceedings of the Internet Measurement Conference 2005 on Internet Measurement Conference, p.2-2, October 19-21, 2005, Berkeley, CA
Yin Zhang , Zihui Ge , Albert Greenberg , Matthew Roughan, Network anomography, Proceedings of the Internet Measurement Conference 2005 on Internet Measurement Conference, p.30-30, October 19-21, 2005, Berkeley, CA
Haining Wang , Cheng Jin , Kang G. Shin, Defense against spoofed IP traffic using hop-count filtering, IEEE/ACM Transactions on Networking (TON), v.15 n.1, p.40-53, February 2007 | routing;LSA traffic;OSPF;enterprise networks |
637247 | Active probing using packet quartets. | A significant proportion of link bandwidth measurement methods are based on IP's ability to control the number of hops a packet can traverse along a route via the time-to-live (TTL) field of the IP header. A new delay variation based path model is introduced and used to analyse the fundamental networking effects underlying these methods. Insight from the model allows new link estimation methods to be derived and analysed. A new method family based on packet quartets: a combination of two packet pairs each comprising a probe following a pacesetter packet, where the TTL of the pacesetter is limited and the end-to-end delay variation of the probes is measured, is introduced. The methods provide 'pathchar-like' rate estimates over multiple links without relying on the delivery of ICMP messages, with reduced invasiveness and other advantages. The methods are demonstrated using simulations, and measurements on two different network routes are used for illustration and comparison against available tools (pathchar and clink). A comprehensive analysis of practical issues affecting the accuracy of the methods, such as link layer headers, is provided. Particular attention is paid to the consequences of 'invisible' hops: nodes where the TTL is not decreased. | INTRODUCTION
Recent papers of Lai and Baker [1] and Dovrolis [2] identify
two primary network effects at the foundation of existing probing
techniques. The one-packet methods are based on the assumption
that the transmission delay is linear in the packet size.
The idea was first introduced by V. Jacobson in his pathchar
tool: to estimate link bandwidths from round trip delays of different
sized packets from successive routers along the path [3],
[1]. The packet pair family is based on the 'spacing' effect of
the bottleneck link [4], the fact that the minimum inter-departure
time of consecutive packets from a link is attained when they
are back-to-back. This effect can be utilised to estimate bottleneck
bandwidth (the smallest link rate) [5], [6], [7], [8], [1],
[2]. Methods estimating available bandwidth are also based on
the observation of the inter-departure time of consecutive probe
packets, taking into consideration the cases when cross traffic
packets are inserted between them. This possibility is discussed
in [6], [7], [9], [1], [2], [10].
In a recent paper [11], we enhanced the packet-pair family
through thoroughly investigating its dependence on probe size
and the effects of link layer headers. In this paper we contribute
to the development of hybrid methods exploiting both of
the above effects. In doing so we make extensive use of the IP
header time-to-live (TTL) field as a means of accessing different
hops along the path.
(Author affiliations: Research Center for Ultra Broadband Information Networks, Dept. of Electrical and Electronic Engineering, The University of Melbourne; Hungary R&D, {a.pasztor, [email protected]}. The authors gratefully acknowledge the support of Ericsson.)
Pathchar was the first link bandwidth estimation
method published based on this idea. It used the TTL
field to induce reverse path ICMP messages, allowing round-trip
delay measurements (figure (1)). Downey suggested a number
of improvements to the filtering method used and implemented
them in his clink tool [3].
Fig. 1. IP time to live (TTL) expiration explained, the TTL of the UDP packet is
set to 2. In real networks the backward route can be different for each TTL.
In this paper we expand and generalise on these ideas in several
ways. First, we describe the advantages of moving from
an approach based on measuring delays, and minimum filter-
ing, to one based on delay variation, and peak detection in his-
tograms. We give a thorough description of the nature of the
noise encountered by probes as they traverse the route, and introduce
new methods exploiting the characteristics of this noise
(for more details see [12]). We also point out the crucial importance
for all one-packet methods of probes remaining in different
busy periods at each hop. This distinction is central to
the performance of probing methods, but has not been clearly
appreciated before (see however [11], [12]). When this condition
is satisfied, the noise typically takes on symmetric characteristics
which allows highly efficient median based filtering.
Using these ideas, we propose ACCSIG as a more efficient
pathchar-like method. The resulting discussion also prepares
the ground for the second and main part of the paper, where
we follow in the steps of Lai and Baker [1] in using TTL, but
without using the returning ICMP messages, with their associated
disadvantages. We introduce a flexible packet quartet (PQ)
probe class, where probes are replaced by a probe and pacesetter
pair, and in this framework define and analyse 5 new estimation
methods based on delay variation and peak detection. We determine
their key characteristics including their susceptability to
different kinds of error. Simulation based comparisons are performed
amongst the new methods, and real network measurements
on two different network routes are used to illustrate the
methods and compare them to pathchar and clink. A full
network based validation however is beyond the scope of the pa-
per, whose main aim is to introduce the packet quartet methods
and to derive their central properties. Finally, several important
practical issues are discussed in detail, in particular the common
problem of invisible hops, where TTL cannot be decremented,
which leads to serious estimation errors and is one of the key
drawbacks of TTL based approaches. The behaviour of the proposed
and existing methods with respect to this problem is given
a thorough treatment. It is shown how it can be detected and
in some cases corrected by combining different packet quartet
methods.
TABLE I
IP-SEGMENT COMPOSITION OF THE TWO TEST ROUTES.
Segment   Local route                  International route
1.        100Mbps LAN                  100Mbps LAN
          (2x100Mbps switched)         (2x100Mbps switched)
2.        Bottleneck 10Mbps LAN        10Mbps LAN (2x10Mbps switched)
3.        -                            100Mbps FDDI
4.        -                            155Mbps ATM
5.        -                            155Mbps ATM (2x155Mbps ATM)
6.        -                            22Mbps
7.        -                            Unknown, est. 100Mbps single hop
8.        -                            Unknown, est. 10Mbps single hop
9.        -                            Suspected Bottleneck, estimated 2x2Mbps
10.       -                            Unknown, estimated 22Mbps
11.       -                            Unknown, estimated >100Mbps
12.       -                            100Mbps LAN (only last link known)
We model a route as a chain of ordered hops, h = 1, ..., H,
where a hop is a FIFO queue with deterministic service rate
μ_h followed by a transmission link of rate μ_h and latency D_h.
If a packet is to be dropped at hop h due to an expired TTL
field, then it does so at the instant it arrives there, without actually
entering the queue at hop h. Physically, this hop model is
consistent with store and forward routers where packet arrivals
and departures are counted from the end of the last bit: the input
queue is seen as part of the previous transmission link, and
only when the packet has fully arrived is it either dropped, or
transferred (in negligible time) to the output queue, which is the
queue modelled.
The network measurements in this paper were taken over
two one-way routes. One, in the laboratory, was only two IP-
hops (we call this a segment) long and completely known. The
other exited the country, was 12 segments long (as measured
by traceroute), and was only partially known, as detailed
in table I. A high accuracy sender, similar to that described
in [13], was used capable of sending arbitrary streams of UDP
probes. The raw experimental data is the arrival and departure
timestamps of the probes. These were collected with a modified
tcpdump using a modified software clock, described in
[14], with a skew of less than 0.1 parts per million. Software
measurement noise was measured to be significant in just a few
measurements per 10000.
II. PATHCHAR REVISITED
We begin with some generic definitions relating to packets
traversing a single hop. Although the equations hold true for
arbitrary packets, in this paper the index i refers to probe packets
only. Probe i arrives to the hop by appearing in the queue at the
time instant τ_i. It begins service after a waiting time of w_i ≥ 0,
completes it after a service time of x_i > 0, and after a constant
propagation delay of D > 0, exits the hop at time
τ*_i = τ_i + w_i + x_i + D. The 1-hop
delay is therefore d_i = τ*_i - τ_i = w_i + x_i + D,
and comparing two successive probes we have the 1-hop
inter-departure time t*_i = τ*_i - τ*_{i-1} (with t_i = τ_i - τ_{i-1}
the corresponding inter-arrival time) and the 1-hop delay
variation δ_i = d_i - d_{i-1} = t*_i - t_i,
in which D plays no role.
Pathchar and its variants are based on measuring the
round-trip delay between the release of TTL limited UDP probes
and the receipt of their corresponding ICMP error messages. If
we label the forward route hop after which the TTL expires by
h ttl , the backward route hop at which the ICMP error message
enters by k ttl , and let K be the last hop on the backward route,
then using equation (1) the delay can be expressed as:

  d_i = Σ_{h=1}^{h_ttl} (w_i^h + x_i^h + D_h) + Σ_{h=k_ttl}^{K} (w_icmp,i^h + x_icmp,i^h + D_h)    (6)

Since the service time of probe i on hop h can be expressed
as x_i^h = p_i / μ_h, where p_i is the probe size, and as the size of ICMP
messages is fixed, we see that there exists a deterministic linear
relationship between probe size and delay. This can be exploited
by varying the probe size, and filtering to eliminate the contribution
of the waiting time terms, which can be thought of as a
kind of noise.
In pathchar, after measuring this round-trip delay for
many probes, a filtering based on minima is applied to the delay
series, which aims to select probes for which the waiting times
were in fact zero. For such probes the delay can
be expressed as

  d_i = p_i Σ_{h=1}^{h_ttl} 1/μ_h + C    (7)

where C represents the packet size independent remainder of
equation (6). This operation is performed for a number of different
probe sizes (default values: pathchar 45, clink 93)
to obtain samples of a function d(p) describing the minima as
a function of probe size p. Finally, a slope is fitted through the
estimated d(p) function values.
From equation (7) it is clear that the slope estimate corresponds
to Σ_{h=1}^{h_ttl} 1/μ_h. The link rates μ_h are then determined
by performing the procedure for all TTL values from 1 up to the
number of IP hops, and estimating the μ_h from the increase in
the slope at each stage.
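For reference, the slope-fitting step can be expressed compactly: given the filtered minimum delays for each probe size and each TTL, fit a line in the probe size and difference the slopes. The sketch below (numpy least squares) is a simplified rendition of this procedure, not the actual pathchar or clink code, and the toy delays are noise-free.

# Sketch: pathchar-style link rate estimation from filtered minimum delays.
# min_delay[ttl][k] is the minimum observed round-trip delay for probe size sizes[k]
# at the given TTL. Simplified rendition, not the pathchar source.
import numpy as np

def link_rates_from_min_delays(sizes, min_delay):
    sizes = np.asarray(sizes, dtype=float)
    slopes = []
    for d in min_delay:                          # one slope per TTL value
        slope, _intercept = np.polyfit(sizes, np.asarray(d, dtype=float), 1)
        slopes.append(slope)                     # slope = sum of 1/mu_h up to h_ttl
    rates = []
    for s_prev, s_cur in zip([0.0] + slopes[:-1], slopes):
        rates.append(1.0 / (s_cur - s_prev))     # mu of the newly exposed segment
    return rates                                 # bytes per second if sizes are in bytes

# Toy example: two segments of 1e6 and 2e6 bytes/s, ideal (noise-free) delays.
sizes = [64.0, 800.0, 1500.0]
min_delay = [[p / 1e6 + 0.001 for p in sizes],
             [p / 1e6 + p / 2e6 + 0.002 for p in sizes]]
print(link_rates_from_min_delays(sizes, min_delay))   # approx [1e6, 2e6]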
In the ACCSIG method described below, we change the above
methodology in one key respect, resulting in a significant improvement
in efficiency. The change is based on a novel approach
of observing and analysing a network route, using the
delay variation of probes, rather than the delay. We now introduce
this approach, which is also the basis of the packet quartet
methods described in section III.
A. A new delay variation based route description
Probing analysis has typically centered on the {d_i} series, as
in pathchar-like methods, or on the time series of receiver
inter-arrival times {t*_i}, as in packet-pair type methods. For either
purpose, we claim that it is advantageous to focus on the delay
variation {δ_i}, for two reasons. First, as δ_i = t*_i - t_i
and the t_i are part of probe design and therefore known, {δ_i} includes
the information in {t*_i} but allows richer structure through the choice of {t_i}. Second,
up to a constant {δ_i} is equivalent to {d_i}, but like {t*_i} it
only requires accurate clock rates for its measurement, not end-
to-end clock synchronisation, an important practical advantage.
In fact, through its reliance on slope estimates, pathchar is
also based on differences of delay. Here we show some of the
advantages of moving to an explicit delay variation framework,
the development of which was motivated by repeated observations
in network measurements of very characteristic features in
delay variation histograms.
As the route delay is simply the sum of the individual hop
delays, the delay variation over an H-hop route can be written
using equation (5) as:

  δ_i = (p_i - p_{i-1}) Σ_{h=1}^{H} 1/μ_h + Σ_{h=1}^{H} (w_i^h - w_{i-1}^h)    (8)
Consider each of the terms in the above equation. The first expresses
the contribution of the probe service times. It is a deterministic
effect independent of cross traffic which accumulates
over hops where adjacent probes have unequal service times.
The second term is due to cross traffic. It can be thought of a
kind of random noise, however the nature of that noise depends
critically upon the interaction between the cross traffic and the
probe stream.
During network experiments we observed that probe streams
with probe separations in the range of 100s of ms and above,
produce characteristic symmetric delay variation histograms.
This can be be understood in terms of the nature of the noise
in such a case, as follows.
Recall that the busy periods for a queue are the time intervals
where the server is continuously active, which separate the
idle periods when the queue is empty. If, for each hop, each
probe arrives in a different busy period, which can be achieved
with high probability by sending them well separated, then each
waiting term w h
i comes from a different busy period, and contains
no explicit probe component. Provided the cross traffic
does not change systematically over the measurement interval,
it is then reasonable to regard these waiting times as an approximately
IID sequence across probes, and hence
(w_i^h - w_{i-1}^h)
as a symmetric, near white noise at each hop. This does not
mean that a strong assumption of low correlations between hops
is being made. Indeed, only if a queue is unstable will we not
be able to ensure that probes arrive to it in different busy periods
for sufficiently large separation.
In figure (2a) the delay variation histogram is given from a
simulation with stationary cross traffic, where the fixed sized
probes do in fact arrive to the hop in different busy periods. The
service time term cancels, leaving only the waiting time terms,
and we observe, as argued above, that they behave as a symmetric
noise. When however consecutive probes arrive during the
same busy period the nature of the noise is substantially differ-
ent. We briefly consider this case in section II-D.3.
We now describe the accumulation signature, the δ_i based
observable which serves as the basis of our new estimation
methods. With constant probe size (assuming x_i^h = x_{i-1}^h), the
accumulation term (p_i - p_{i-1}) Σ_{h=1}^{H} 1/μ_h vanishes, corresponding
to a peak at δ = 0 in the histogram, which the noise spreads
out. If instead the probe sizes are made to alternate (the alternation
is essential, other orderings would produce different distributions),
the peak splits into two, and the distance between the
twin peaks is proportional to the quantity Σ_{h=1}^{H} 1/μ_h, and allows
it to be measured. In figure (2b) it is seen in the context of a
real measurement how these symmetric peaks combine with the
symmetric noise to produce a symmetric bimodal distribution -
the signature.
Fig. 2. Accumulation signature in delay variation histograms. (a) 1 hop simula-
tion. Constant probe size: the service time cancels, the noise is symmetric,
(b) 12 hop measurement. Alternating probe size: the peak splits and the
signature emerges: a symmetric bimodal histogram.
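The behaviour illustrated in Figure 2 is easy to reproduce with a toy single-hop simulation. The sketch below sends widely spaced probes through a FIFO queue fed by Poisson cross traffic and prints the empirical peak locations of the delay variation histogram for alternating probe sizes; the queue and cross-traffic model, and all parameter values, are our own illustrative simplifications.

# Sketch: single-hop simulation of the delay variation histogram of Figure 2.
# Widely spaced probes cross a FIFO link fed by Poisson cross traffic; with
# alternating probe sizes the histogram shows twin peaks near +/- (p2-p1)/mu.
import random

def one_hop_delays(probe_sizes, probe_gap, mu, ct_rate, ct_size, horizon):
    """Queueing + service delay of each probe through one FIFO hop (rate mu, bytes/s)."""
    arrivals = []
    t = 0.0
    while t < horizon:                                  # Poisson cross-traffic arrivals
        t += random.expovariate(ct_rate)
        arrivals.append((t, ct_size, False))
    for i, p in enumerate(probe_sizes):                 # widely spaced probes
        arrivals.append((i * probe_gap, p, True))
    arrivals.sort()
    delays, prev_t, workload = [], 0.0, 0.0             # Lindley recursion on unfinished work
    for t, size, is_probe in arrivals:
        workload = max(0.0, workload - (t - prev_t))    # queue drains between arrivals
        if is_probe:
            delays.append(workload + size / mu)         # waiting time + own service time
        workload += size / mu
        prev_t = t
    return delays

random.seed(3)
mu, gap, n = 1.25e6, 0.5, 2000                          # 10 Mbps link, probes 0.5 s apart
sizes = [100.0 if i % 2 else 1500.0 for i in range(n)]  # alternating probe sizes (bytes)
d = one_hop_delays(sizes, gap, mu, ct_rate=400.0, ct_size=1000.0, horizon=n * gap)
deltas = [b - a for a, b in zip(d, d[1:])]              # delay variation series
neg = sorted(deltas[0::2])                              # parity class with p_i - p_{i-1} < 0
pos = sorted(deltas[1::2])                              # parity class with p_i - p_{i-1} > 0
print(pos[len(pos) // 2], neg[len(neg) // 2], (1500.0 - 100.0) / mu)  # twin peaks vs theory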
To reduce measurement error it is desirable to increase the
signature, that is to increase the distance between the peaks.
This suggests constructing a probe stream with sizes which alternate
between the minimum and maximum possible values,
rather then using a large number of different packet sizes as in
pathchar-like methods. A second order effect, the fact that
the efficiency of peak detection may depend on packet size, will
be discussed below.
The delay variation framework is inherently robust in that
the key assumption of probes arriving in different busy periods
can be verified by checking for the symmetry of the his-
togram. Although non-stationarity in the cross traffic will distort
the distribution, in practice it will remain approximately symmetric
unless a systematic change occurs over the course of the
measurement. A level shift for example in one of the cross traffic
streams, results in the received delay variation histogram being
the sum of two symmetric components, together with a non-symmetric
component consisting of just a single - i value.
B. ACCSIG - A new link bandwidth estimation method
In this section we apply the accumulation signature idea to the
TTL scenario of pathchar. Hence, as above, we assume that
well spaced probes are dropped after hop h ttl and that ICMP
messages are generated and sent back to the sender along some
backward path. Formally, we can consider these ICMP packets
as the original probes which have undergone a size change after
hop h ttl . One can then use equation (8) to write the round-trip
(sender-to-sender) delay variation as

  δ_i = (p_i - p_{i-1}) Σ_{h=1}^{h_ttl} 1/μ_h + (p_icmp,i - p_icmp,i-1) Σ_{h=k_ttl}^{K} 1/μ_h + Σ_{h=1}^{h_ttl} (w_i^h - w_{i-1}^h) + Σ_{h=k_ttl}^{K} (w_icmp,i^h - w_icmp,i-1^h)
where as before K is the number of hops on the backward route,
and k ttl is the hop where the ICMP packet enters the backward
route. As the ICMP packets share a common size, independently
of the size of the probe which triggered their generation, the
second component of the above equation becomes zero, leaving
an accumulation term generated by the alternating initial probe
sizes up to hop h ttl only, but noise terms corresponding to both
the forward and backward paths. By the same argument as in
section II-A, these noises should be symmetric, resulting in a
bimodal, symmetric delay variation histogram. The resulting
delay variation analogue of equation (7) is

  δ_i = (p_i - p_{i-1}) Σ_{h=1}^{h_ttl} 1/μ_h + N_i

where N_i is the symmetric noise term. Here we are assuming
that the time to generate the ICMP packet is independent of the
probe size.
The estimation of link bandwidths is performed recursively,
similarly to pathchar and clink. A group of probes with a
fixed limited TTL is sent to measure Σ_{h=1}^{h_ttl} 1/μ_h, and the procedure
is repeated for steadily increasing TTL until the receiver
is reached. The μ_h are determined from the difference in the
estimates at consecutive steps.
To complete our description of ACCSIG the question of peak
detection in the δ_i histogram must be addressed. A number of
different methods could be applied to perform this task, including
kernel density estimation as suggested in [8], or using conditional
sample means or quantiles. Both here and throughout
this paper we used the following median related method which
we found to be more accurate than kernel estimation, and which
avoids the outlier sensitivity of moments.
As the probe sizes alternate, the accumulation term (p_i - p_{i-1}) Σ_h 1/μ_h
has two possible values. Assume that p_1 < p_2. Two subsets of {δ_i} can
be defined depending on the parity of i: one containing the δ_i
for which p_i - p_{i-1} > 0, the other those for which p_i - p_{i-1} < 0.
Each of these is symmetric in its own right and corresponds
roughly to the parts of the histogram to either side of the origin.
The peaks are detected by taking the medians of these subsets.
The absolute values δ+ and δ- of the peak locations can also
be used to provide an informal interval estimate, in a similar
way to clink [3]. This is defined by the rate estimates corresponding
to the closest of the two over ttl-1 and ttl, and the
furthest.
Fig. 3. Relative error of different methods over 1-hop segments: (a) last (2nd) segment (10 Mbps) of the local route, (b) 6th segment (22 Mbps) of the international route. Methods compared: Pathchar, Clink, Accsig, Accsig*, PQ1, PQ2, PQ3, PT1.
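Putting the pieces together, the per-TTL peak separation and the recursive differencing can be written compactly. The sketch below assumes, for each TTL, a series of measured delay variations obtained with alternating probe sizes p1 < p2 and peaks located by the parity-subset medians described above; it is an illustration of the estimator, not the authors' implementation, and the worked example uses noise-free invented values.

# Sketch: ACCSIG-style rate estimation from delay variations measured with
# alternating probe sizes p1 < p2, one delta series per TTL value.
import statistics

def cumulative_inverse_rate(deltas, p1, p2):
    """Estimate sum_{h <= h_ttl} 1/mu_h from one TTL's delay variation series."""
    even = [d for i, d in enumerate(deltas) if i % 2 == 0]
    odd  = [d for i, d in enumerate(deltas) if i % 2 == 1]
    m_hi = max(statistics.median(even), statistics.median(odd))   # peak near +(p2-p1)*S
    m_lo = min(statistics.median(even), statistics.median(odd))   # peak near -(p2-p1)*S
    return (m_hi - m_lo) / (2.0 * (p2 - p1))

def segment_rates(deltas_per_ttl, p1, p2):
    """Difference the cumulative estimates of successive TTLs to get per-segment rates."""
    cum = [cumulative_inverse_rate(d, p1, p2) for d in deltas_per_ttl]
    rates, prev = [], 0.0
    for s in cum:
        rates.append(1.0 / (s - prev))
        prev = s
    return rates

# Toy noise-free example: two segments of 1e6 and 2e6 bytes/s, p1=100, p2=1500 bytes.
p1, p2 = 100.0, 1500.0
acc1 = (p2 - p1) * (1.0 / 1e6)                   # accumulation over segment 1
acc2 = (p2 - p1) * (1.0 / 1e6 + 1.0 / 2e6)       # accumulation over segments 1+2
deltas_ttl1 = [acc1 if i % 2 else -acc1 for i in range(200)]
deltas_ttl2 = [acc2 if i % 2 else -acc2 for i in range(200)]
print(segment_rates([deltas_ttl1, deltas_ttl2], p1, p2))   # approx [1e6, 2e6]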
The key advantage of the above approach is that, unlike
methods such as pathchar and variants which attempt to find
the minimum delay and in so doing discard the majority of the
delay values, the accumulation signature utilises the information
from all the probes that are received. This greatly reduces the
number of probes that need to be sent, especially in the case of
longer routes and higher link utilisations, as minimum detection
filtering requires the probes to arrive to empty queues at each
hop along the route.
C. Comparison with pathchar and clink
A full comparison of ACCSIG against pathchar and
clink is beyond the scope of this paper, as it would involve
covering many different route and traffic scenarios, as well as
tackling the issues of how to conduct a truly fair comparison
and how to measure accuracy appropriately. Instead we offer
a simple network based illustration of their comparative performance
for lightly loaded routes of short and medium length (be-
tween midnight and 6 a.m in both the sender and receiver
timezones). Under these conditions we expect each method to
produce similar results. It is well known that under heavy traffic
conditions minimum delay detection is problematic and estimates
from pathchar and its variants typically suffer significant
errors [3], [1]. ACCSIG will perform better under these
circumstances due to the symmetry of the independence signa-
ture, which allows the detection of the delay variation peak positions
even if no probes traverse the path with minimum delay.
In effect, the nature of the noise is used to advantage.
The measurements were performed over a two and a twelve
segment route. The short route was on the local LAN and was
fully known, whereas the other was international (between the
University of Melbourne and the WAND group at the University
of Waikato in New Zealand) and only partially known. As be-
fore, by segment we mean a hop at the IP level, many of which
consist of multiple link layer hops, as indicated in table I.
As a rough measure of fairness we selected the number of
probes transmitted in the ACCSIG probe stream to be equal to
the number used by pathchar, which is roughly twice that
of clink under default settings. In addition, we produced
ACCSIG estimates with only the first quarter of the δ_i values
used (marked by Accsig*), which corresponds to half the number
used by clink. Figure 3 shows the relative errors of the
link rate estimates produced by different methods on two selected
1-hop segments (the other methods will be introduced in
section III). Results over other 1-hop segments with low rate
are similar to those of figure 3b, the difference in estimates between
the three methods being usually within 10%. ACCSIG
performs better on the last hop of a route, such as in figure 3a,
as it does not rely on the generation of the ICMP 'UDP port un-
reachable' packet, which incurs an additional latency, but uses
receiver timestamps instead. This however is not intrinsic to
ACCSIG, but a function of the current implementation.
Fig. 4. Efficiency of ACCSIG. Link rate estimates (with upper and lower interval estimates) as a function of the number of probe pairs in the international route. (a) Segment #2 (2-hops), (b) Segment #6 (1-hop), and (c) Segment #9 (suspected bottleneck, suspected 2-hops).
The expected benefit of ACCSIG is its more efficient use of
the probes sent. To quantity this further, figure 4 plots a sequence
of estimates based on truncating the received delay variation
series from the international route to emulate shorter probe
streams. We see that the 'time' to convergence to the long term
estimates varies from as little as 10 probes, up to 200, depending
on the distance of the segment from the sender, and its link rate.
Larger distances increase the accumulated cross traffic noise, resulting
in slower convergence (compare charts 4a and 4b). How-
ever, segment #9 is further away yet convergence is faster than
at segment #6. This is due to the fact that the delay variation
increment induced by a hop is inversely proportional to its rate,
and segment #9 is the (suspected) bottleneck.
Because the assumption of probes arriving in different busy
periods is fundamental for the ACCSIG method, it is not possible
to obtain estimates more quickly by sending probes closely
spaced. It should be noted however that this assumption is of key
importance for pathchar and its variants as well: reducing the
time of the measurement while keeping the number of probes
constant has a negative impact on the accuracy of all one-packet
based methods. In the case of minimum filtering the reason for
this is particularly clear. Probes which closely follow each other
Relative
error
uncorrected est.
upper
estimate
lower
Relative
error
uncorrected est.
upper
estimate
lower
Pathchar Clink Accsig Accsig* PQ1 PQ2 PQ3 PT1
Measurement methods
Relative
error
uncorrected est.
upper
estimate
lower
(a) 100 Mbps
(c) 1.94 Mbps
Correction for invisible hops
Correction for invisible hops
Correction for invisible hops
Fig. 5. Relative error over multiple hop segments. (a) 1 st segment of local
route (2x100Mbps LAN), (b) 2 nd segment of international route (2x10Mbps
LAN), (c) 9 th segment of international route, measured as the bottleneck
(2x2Mbps).
will be likely to share busy periods with other probes in at least
one router in the route, and therefore will not be traversing the
system with minimal delay as required.
In figure 5, three further examples from the same experiments
are given for segments with multiple hops. Once again
pathchar, clink and ACCSIG give very similar estimates,
however they are half of the true link rate! One of the main contributions
of this paper is to point out that this is due to invisible
hops. We describe this problem in more detail in the next section
( II-D.1), and contribute to its resolution in section III.
D. Practical Issues
The measurement results reveal that all three methods sometimes
suffer significant errors even in the case of relatively low
bandwidth segments. This section describes a number of factors
which can significantly affect the accuracy of estimates.
Some are generic and shared by all three methods and even more
widely, others hold only for pathchar and clink.
D.1 Invisible hops
Unfortunately, in real networks not all switching elements test and decrease the TTL value in the IP header, and therefore become, for practical purposes, invisible to IP-TTL based probing [15]. For example Ethernet and ATM switches, although store and forward switching elements, are below the IP layer and do not touch IP's TTL, as illustrated on figure (6). Some IP routers are even configured to not decrease TTL.
As a result of this, TTL based measurement methods sometimes give estimates corresponding to TTL 'segments', that is, sequences of hops between which the TTL does not change, rather than individual hops. For example the bandwidth estimate μ̂_{IP,1} from the first stage of clink, pathchar and ACCSIG for the route shown on figure (6) (which corresponds to the three-hop two-segment local test route) will be

    1/μ̂_{IP,1} = 1/μ_1 + 1/μ_2,

which, using ttl ≥ 1 to index segments, generalises to

    1/μ̂_{IP,ttl} = \sum_{h = h_{ttl-1}+1}^{h_{ttl}} 1/μ_h,    (13)

where h_ttl is the last hop before the pacesetter is dropped, and by convention h_0 = 0.
[Figure 6 — diagram residue: link level ("real") path representation versus IP level path representation; sender, LAN switch, IP router, receiver; hops #1-#3; packets with IP-TTL=1 dropped at the IP router.]
Fig. 6. Invisible hops: the lower view of the route shows all hops, the upper only those that test and decrease IP's TTL.
This effect is very likely to be responsible for the errors in
the estimates of figure (5). The original estimates are marked as
uncorrected, and show significant estimation errors. However
using our knowledge about the presence of invisible hops and
their effects, as detailed for example in equation (13) for ACC-
SIG, the estimates can be corrected. Unfortunately this correction
is impossible if the presence of invisible hops is unknown.
Packet quartet based methods, introduced in section III, provide
an efficient means of detection.
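A small numerical sketch of this hop-mixing effect, under the assumption that a TTL segment estimate combines its hops harmonically as in equation (13); the function names are illustrative only.

```python
def segment_estimate(hop_rates_bps):
    """Rate reported for a TTL segment that really contains several hops,
    assuming per-hop service times simply add up (equation (13))."""
    return 1.0 / sum(1.0 / r for r in hop_rates_bps)

def correct_known_invisible(segment_rate_bps, known_other_hops_bps):
    """Recover the remaining hop's rate when the other (invisible) hops of the
    segment are known, by removing their share of the accumulated service time."""
    residual = 1.0 / segment_rate_bps - sum(1.0 / r for r in known_other_hops_bps)
    return 1.0 / residual

# Two 100 Mbps LAN hops seen as one IP segment: the estimate is half the true rate.
print(segment_estimate([100e6, 100e6]) / 1e6)        # 50.0
print(correct_known_invisible(50e6, [100e6]) / 1e6)  # 100.0
```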
D.2 High bandwidth links
The estimates for high bandwidth links unfortunately are too
unreliable to support meaningful comparison. To understand
why this is the case for the delay variation and peak detection
underlying ACCSIG, consider that with probes of 1500 and 40
bytes, a 12 μs error in the estimate of the increase in inter-peak distance for two consecutive ttl values, corresponding to a 6 μs error for a single peak, produces a 10% error in the estimate of a 100Mbps link. The accuracy of the peak detection is mainly a function of the number of probes used and the amount of cross traffic noise entering via the waiting time components of equation (9), which increase with ttl. Note that pathchar and clink estimates are subject to time-stamping errors, ranging from tens of μs to hundreds of ms, which can lead to similar estimation errors.
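The quoted sensitivity can be checked directly; the packet sizes are those used above, the rest is simple arithmetic.

```python
# Service-time difference of 1500 and 40 byte probes on a 100 Mbps hop.
size_diff_bits = (1500 - 40) * 8          # 11680 bits
true_step = size_diff_bits / 100e6        # ~116.8 microseconds per hop
for err in (+12e-6, -12e-6):              # a 12 microsecond error on this step
    est = size_diff_bits / (true_step + err)
    print(round(est / 1e6, 1))            # ~90.7 and ~111.5 Mbps: roughly a 10% error
```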
D.3 Invasiveness
Pathchar and clink determine the sending times of
probes, by default, based on the round-trip times. This, as noted
in [8], can result in loads as high as 60% on a 10 Mbps LAN
with a round-trip delay of 1ms.
Especially on highly utilised network routes, this approach
can lead to probes joining the queues in the same busy period as
the previous probe. As such a probe by definition won't experience
minimum delay, the minimum filtering step of pathchar and clink either filters out these probes, in which case the methods do not gather any information from these probes, or they will lead to erroneous minimum delay estimates and hence to link rate estimation errors.
In the case of ACCSIG, by recursively applying Lindley's equation (16) and using equation (8), it can be shown that the delay variation for such a probe takes the modified form of equation (14), in which the waiting time differences (w^h_i - w^h_{i-1}) before hop s(i) are altered and an additional cross traffic term appears. Here s(i) is the last hop along the route where probe i shared a busy period with probe i-1, and c^{s(i)}_i is the aggregated service time of the cross traffic entering that queue between probes i-1 and i.
By comparing equations (8) and (14) it is clear that the accumulation
signature before hop s(i) is destroyed by this new
effect, so that if a significant number of probes experience it,
an erroneous link bandwidth estimate will result. This condition
can be detected however, as the definition of the peaks and
the symmetry of the histogram is lost, and estimates become erratic. Our suggestion in this case is to send probes with increased spacing. This is less invasive, and probably much more efficient, than the traditional wisdom of simply transmitting more probes.
To limit invasiveness, in all the measurements in this paper the
probe streams (for ACCSIG and for the PQ methods) were dimensioned
to remain below 64kbps on average.
Increasing the spacing of probes will naturally decrease the
probability of them sharing busy periods, but, if the number of
probes remains the same the measurement time will increase
proportionally.
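As an illustration of the dimensioning rule used above, a sketch of the minimum average spacing of probe pairs needed to stay under a 64 kbps budget; the packet sizes are examples, not the exact streams used in the measurements.

```python
def min_pair_spacing_s(bytes_per_pair, budget_bps=64_000):
    """Average spacing between probe pairs that keeps the mean load of the
    probe stream below the given budget."""
    return bytes_per_pair * 8 / budget_bps

# e.g. a 1500 byte pacesetter followed by a 40 byte probe:
print(min_pair_spacing_s(1500 + 40))   # ~0.19 s between pairs
```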
D.4 Lower layer packet headers
Packet pair like methods are based on estimates from the
service time of a single probe, whereas those underlying
pathchar,clink and ACCSIG are based on differences of
service times. Because of this the additional headers of link
layers can affect packet pair based estimates, as investigated
in detail in [11], but they leave unaffected the results of the
pathchar variant methods, where the packet size change cancels
out. Thus, different methods see different values, an important
point to understand regardless of whether one prefers to
measure the bandwidth as seen by IP or the link layer.
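A toy calculation of this point: a per-packet lower layer header inflates a single service time, but cancels in the difference of two service times. The 38 byte header used here is an arbitrary example value.

```python
def service_time(size_bytes, rate_bps, l2_header_bytes=0):
    return (size_bytes + l2_header_bytes) * 8 / rate_bps

rate = 10e6                                            # 10 Mbps link
print(service_time(1500, rate, l2_header_bytes=38))    # packet-pair view, header included
print(service_time(1500, rate))                        # IP-layer view
d_with = service_time(1500, rate, 38) - service_time(40, rate, 38)
d_without = service_time(1500, rate) - service_time(40, rate)
print(d_with == d_without)   # True: difference-based estimates are unaffected
```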
D.5 Non-linearities
Another practical problem can arise when the delay is non-linear
in packet size, as shown in [15]. Such non-linearities are
typically the result of lower layer effects, such as layer 2 frag-
mentation. If the network layer payload is fragmented, the resulting
additional lower layer header(s) decreases the effective
rate seen by the network layer. Another example could be a
minimum frame size at layer 2, as for example in the case of
Ethernet. In such cases the spread of different packet sizes used
by pathchar and clink can be an advantage. ACCSIG can
also use a large(er) number of different sized probes, at the possible
expense of an increase in peak detection errors.
D.6 Forwarding time
If the time needed to generate the ICMP message in response
to the UDP packet is not constant, for example if it has a component
depending on packet size, it can lead to erroneous bandwidth estimates. This will not only affect the estimate of a given hop, but will propagate as TTL is increased, as the link rates
are extracted using a sum over previous TTL values. This error
is large on the last hop for both pathchar and clink as
discussed above.
III. PACKET QUARTETS
The methods of the previous section rely on the accurate timing
of returning ICMP packets. The resulting reliance on appropriate
router behaviour, plus the additional noise of the return
path(s), are key drawbacks. One would prefer to exploit TTL's
ability to access hops individually along the route in some other
way. The tailgater method described by Lai and Baker in
[1] is one way of doing this. It is derived from a deterministic
model of packet delay and consists of two phases. First, by sending
isolated probes and using linear regression on the minimum delay for each size, the accumulated 1/μ_h over the route and the remaining size-independent delay components are estimated.
In the second phase pairs of packets are sent, each consisting of
two packets sent back-to-back: a packet of the smallest possible
size, the tailgater, follows the largest possible (non-fragmented)
packet with a limited TTL. The goal is to cause the tailgater to
queue directly behind the large packet at each hop until the TTL
expires, enabling the accumulated service time to be measured
from that hop on. The final bandwidth estimates are determined
using minima filtering for each TTL, combined with the components
measured in the first phase.
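A sketch of that first, regression-based phase as described above: the slope of minimum delay against probe size estimates the accumulated inverse link rate of the route, and the intercept the size-independent delay. The route parameters and synthetic data are hypothetical.

```python
import numpy as np

# Hypothetical route: links of 10 and 100 Mbps plus 2 ms of fixed delay.
rates_bps = [10e6, 100e6]
fixed_delay_s = 2e-3
sizes = np.array([100, 400, 700, 1000, 1300, 1500], dtype=float)   # probe sizes, bytes
min_delay = fixed_delay_s + sizes * 8 * sum(1.0 / r for r in rates_bps)

slope, intercept = np.polyfit(sizes, min_delay, 1)
print(1.0 / (slope / 8))   # ~9.09e6: the harmonic sum of the link rates
print(intercept)           # ~0.002 s of size-independent delay
```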
[Figure 7 — diagram residue: a stream of pairs from sender to receiver; pacesetter packets have a limited TTL, probe packets are delivered to the receiver.]
Fig. 7. Packet quartets: pacesetter packets have limited TTL, while the probes reach the receiver.
Drawbacks of the tailgater method include those related
to minima filtering, and the need to perform two phases. To
address these and other issues we introduce packet quartets (PQ)
(see figures (7) and (8)) as a flexible probe pattern underlying a
new family of link bandwidth estimation methods.
A packet quartet is formed by grouping any two of a sequence
of packet pairs, each consisting of a probe packet closely following
a pacesetter packet, which has a limited TTL. The pairs,
similarly to the probes of the ACCSIG method, are sent sufficiently
separated to avoid joining queues during the same busy
period. Within pairs the opposite is true, the probe and the pace-
setter are designed to be in the same busy period at each hop.
Practical limitations to achieving this, and their consequences,
are discussed in the next paragraph. Apart from the pair spacing, packet quartets have six parameters: the two probe sizes p_{i-1} and p_i, the two pacesetter sizes p_{s,i-1} and p_{s,i}, and the last hops that the pacesetters traverse before dropping out: hops A and B. In what follows we discuss a four packet pattern as in figure 7. It should not be forgotten however that when i is incremented the order of A and B reverses, negating the accumulation terms to produce the other half of the symmetric histograms, as before.
[Figure 8 — diagram residue: the four packets of a quartet between sender and receiver, with hops A and B marked.]
Fig. 8. Packet quartets: pacesetter packets #1 and #3 have limited TTL, while probes, packets #2 and #4, reach the receiver.
Within a pair, the following effect is important to understand.
Assume that the probe and its pacesetter are back-to-back and
that the latter has just arrived at an empty queue on hop h. Because
of the store and forward router behaviour, the probe will
not enter the queue until after an interval x^{h-1} controlled by the previous hop, whereas the pacesetter will leave the queue after a time x^h controlled by hop h. Thus, as pointed out in [1],
the ability of the pacesetter to guarantee that the probe queues
behind it is bounded by the ratio of the probe and pacesetter service
times, which depends on the ratio of the link rates. Thus,
the ratio of the packet sizes is both a significant design issue and
a practical limit of the applicability of these methods to high
speed links. However, separation of the pair members does not
invalidate packet quartets for three reasons. First, as the delay
variation based model will make clear, it is not essential that
the probes and pacesetters be back-to-back, the fundamental requirement
is that they share the same busy period. Second, the
separation may only persist over a few hops. If the probe catches
up to its pacesetter's busy period then the pair will resume its
role from that point. Finally, it is not necessary that all pairs
stay together, but only that enough do so to make the target signature
large enough to be detected.
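The bound just described can be phrased as a simple per-hop check: a back-to-back probe stays behind its pacesetter at hop h only if its own service time on the previous hop does not exceed the pacesetter's service time on hop h. The formulation below paraphrases that argument; it is not a quote of the paper's condition.

```python
def stays_trapped(probe_bytes, pacesetter_bytes, rate_prev_bps, rate_bps):
    """Probe enters queue h after its own service time on hop h-1; the
    pacesetter leaves queue h after its service time on hop h."""
    return probe_bytes * 8 / rate_prev_bps <= pacesetter_bytes * 8 / rate_bps

# A 40 byte probe behind a 1500 byte pacesetter tolerates a rate step-up of ~37x:
print(stays_trapped(40, 1500, 10e6, 100e6))   # True
print(stays_trapped(40, 1500, 10e6, 1e9))     # False: a 100x step-up is too much
```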
A. A delay variation based model
We adopt the convention that probes are those packets we observe
and measure, as they arrive at the receiver. As in section II,
we focus on the delay variation of the probes, which are indexed
by i. As pacesetter packets are (typically) dropped before reaching
the receiver, they are not classified as probes. Instead, they
manifest as special components of the waiting times that probes
experience, characterised by a known duration and persistence
across hops.
From equation (8) it is obvious that δ^{(j)}_i, the delay variation up to hop j, can be expressed as in equation (15). Lindley's equation connects the waiting times of successive packets. With an infinite buffer, it states

    w^h_i = max(0, w^h_{i-1} + x^h_{i-1} - t^h_i),    (16)

where t^h_i is the time between the arrivals of packets i-1 and i at hop h.
If we assume that probe i will enter hop h to join the same busy period as its pacesetter, the probe's waiting time becomes that of equation (17), where c^h_{s,i} is the aggregate service time of cross traffic packets which arrived at hop h between the arrival of the pacesetter and that of the probe. Since by definition the arrival time at hop h is the departure time from hop h-1, the last component can be rewritten as in equation (18) for all h > 1. Here we are assuming that the probe and its pacesetter also share the same busy period at hop h-1.
Recursively applying equations (15), (17) and (18), the delay variation of the probes of a general packet quartet becomes equation (19), a combination of accumulation terms over the hops up to A and B, waiting time differences (w^h_i - w^h_{i-1}) and cross traffic terms, where t^1_{s,i} is the interarrival time of the pacesetters at the first hop.
The properties of the packet quartet delay variation are qualitatively different depending on which of the following scenarios the TTL values lie in: 0 < A = B ≤ H, 0 < A < B ≤ H, or 0 = A < B ≤ H, where H is the number of hops. The case with A = B = 0 corresponds to probes only being sent from the sender, which was discussed in section II. In each of these regions of the quartet design space, new methods will be defined. Their properties and limitations will be compared against each other as well as against the methods of section II.
IV. PACKET QUARTET LINK ESTIMATION METHODS
In this section 5 canonical estimation approaches, PQ1, PQ2,
PQ3, PT1 and PT2, arising out of packet quartets, will be described
and their main properties derived. Simulations are used
to illustrate these properties, but are not meant to test performance
under all conditions. Similarly, the network comparisons
provide a basis for discussion of the primary features of the
methods, and of important issues such as invisible hops, rather
than a full scale validation. A comparison against the tail-
gater method of [1] is not provided as an implementation was
not available (we expected it to be included in the nettimer
package, however its documentation states that it is not [16]).
A. PQ1 and PQ2
[Figure 9 — diagram residue: a quartet with both pacesetters dropped after hop A; accumulation components from the sender to hop A and from hop A to the receiver.]
Fig. 9. Packet quartets with 0 < A = B.
This TTL scenario is illustrated in figure (9). For it, equation (19) reduces to equation (20).
There are two service time, or more specifically accumulation, components in this equation. The first is an accumulation
signature due to the difference in the size of the pacesetters, and
as such it is created between hop #1 and A, after which the pace-
setters are dropped. The second is due to the difference in the
size of the probes, and is created from hop A to the receiver.
If both of these were active it would be difficult to determine
which term had contributed in what proportion to the resulting - i
value. Fortunately, they can be selectively eliminated by choosing
probes or pacesetters to be of equal size. This leads to two
more specialised probe patterns which form the basis of two new
bandwidth estimation methods.
We first define the methods, then compare them and investigate
their properties. For the remainder of this subsection we collect the non service time components of equation (20) into a single noise term.
A.1 Method PQ1
Setting p_{i-1} = p_i = p, a constant, for each probe cancels the second accumulation component, leaving the defining equation (21) for method PQ1.
Equations (21) and (10) differ only in their noise terms. As a
result, link bandwidth estimates of individual hops can be made
in exactly the same way as for ACCSIG.
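A minimal sketch of a PQ1 probe pattern as just defined: the probe size is held constant, and only the pacesetter size alternates. The pacesetter sizes are those quoted for the PQ1 measurements later in the paper; the 40 byte probe size and all names are assumptions for illustration.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Pair:
    pacesetter_bytes: int   # TTL-limited packet, dropped after `ttl` hops
    probe_bytes: int        # reaches the receiver and is timestamped there
    ttl: int

def pq1_stream(ttl, n_pairs, pacesetter_sizes=(1500, 730), probe_bytes=40):
    """PQ1 pattern: constant probe size, alternating pacesetter sizes, so only
    the pacesetter-size accumulation signature (hops 1..A) survives in the
    probe delay variations."""
    sizes = cycle(pacesetter_sizes)
    return [Pair(next(sizes), probe_bytes, ttl) for _ in range(n_pairs)]

stream = pq1_stream(ttl=3, n_pairs=6)
```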
A.2 Method PQ2
Setting p_{s,i-1} = p_{s,i} = p_s, a constant, for each pacesetter cancels the first accumulation component, leaving the defining equation (22) for method PQ2.
Equations (21) and (22) are formally very similar, but the
methods defined by them have some very different properties.
A.3 PQ1 and PQ2 compared
The essential difference with PQ2 is that, to determine the
bandwidth of hop h ttl , the difference of delay variations from
stages h ttl and h ttl+1 must be taken, instead of from h ttl
and h ttl 1 as for PQ1, ACCSIG and the pathchar variant
methods.
As one important consequence of this, the link bandwidth estimates
of segments including invisible hops will be different for
the two methods. While the estimates from PQ1 will be formed
as described in equation (13) for ACCSIG, those from PQ2 will obey the different hop mixing equation (23).
This effect is shown in figure (10), from which it is easily seen that the bandwidth of the last hop, hop 2, of the first segment can be measured. This is true more generally: the last hop of segments containing invisible hops can be accessed. In effect this exploits a kind of 'edge effect' arising from the differences in the two equations.
[Figure 10 — diagram residue: the link level ("real") and IP level representations of the three-hop local route (sender, LAN switch, IP router, receiver), with brackets marking the span covered by the pathchar/clink/ACCSIG/PQ1 based estimate and by the PQ2 based estimate.]
Fig. 10. Invisible hops: PQ1 and PQ2 produce different hop bandwidth estimates in the presence of an invisible hop.
The second important consequence regards the peak detection
accuracy needed. Recall that measuring the distance between
the symmetric peaks in the - i histogram is the basis of the accumulation
signature based estimate here just as for ACCSIG. In
PQ2 the peak detection is based on the difference in the probe
sizes. However this is necessarily small, since by the definition
of packet quartets, probes are significantly smaller than the
pacesetters, and this will also be true of their difference. As a result
the distance between the peaks for PQ2 will be significantly
smaller than that for PQ1 on the same hop, which makes PQ2
intrinsically less accurate than PQ1.
The third consequence again favors PQ1. Comparing equation (20) to equation (9), we find the additional noise component c^A_{s,i}, describing the cross traffic arriving between probes and their pacesetters. On heavily loaded hops this term can significantly bias the estimates. Now for simplicity assume stationary cross traffic on hop A and let the components of each pair be back-to-back on leaving hop A-1. Since for PQ1 the probe size is constant, the time period x^{A-1}_i in which the cross traffic may infiltrate between the probe and its pacesetter on hop A is the same for all i. The cross traffic components will then be statistically identical and tend to cancel each other. For PQ2 this is not the case, as the probe sizes are different (in fact deliberately as different as possible to minimise sensitivity to peak detection errors). For PQ2 this additional asymmetric cross traffic breaks the symmetry of the distribution of the δ_i, causing significant problems for median based peak detection. Other peak detection methods should therefore be used under heavy network load.
A.4 PQ1, PQ2 simulation results
Simulations results of a 6 hop route with both zero and moderate
cross traffic are shown in table II. The accuracy of PQ2
drops sharply in the presence of cross traffic, whereas PQ1 performs
extremely well in both cases.
Similar simulations of the same 6 hop route, but with two hops made invisible, leaving 4 segments, are given in table III.
Just as for ACCSIG, the results for segments may not correspond
directly to the known link rates, but are predicted by the
'hop mixing' equations, equation (13) for PQ1 (and ACCSIG)
and equation (23) for PQ2. Of these, that of PQ1 is easier to
interpret. From table III (no CT) we see that for single hop segments
we recover the link rate, and for segments with two hops
of the same rate, the method recovers half of the rate. Although
less obvious, with no cross traffic PQ2 also gives values which
obey its respective mixing equation. Method PQ2 however performs
less well under cross traffic, as can be seen by comparing
the method's no CT and with CT columns of table III. Large
differences between PQ1 and PQ2 can be seen as evidence for
the presence of invisible hops.
A.5 PQ1, PQ2 measurement based comparisons
Results for selected segments of the two network routes for our new methods are shown in figures 3 and 5.
The PQ1 based estimates are in good agreement with
pathchar, clink and ACCSIG results. The main difference
with PQ1 is that the number of noise components only depends
on the number of hops, rather than varying with TTL across
the different stages of the measurement. The aggregate noise
is therefore lower, roughly speaking, for the second half of the
route, due to the lack of backward path. This is a great advantage
on long routes. The above feature of PQ1 is shared by PQ2.
Another is that neither rely on receiving ICMP error messages,
which eliminates forwarding time errors entirely.
The cost of these advantages is a higher sensitivity to peak
detection errors. For example, the PQ1 measurements were performed
using 1500 and 730 byte pacesetter packets followed by
probes, and the PQ2 measurements employed 1500 byte
pacesetters followed by 40 and 100 byte probes. With these val-
ues, a given peak detection error would cause an estimation error
for PQ1 approximately twice as large as that for ACCSIG, and
this ratio increases to approximately 20 for PQ2. PQ2's higher
sensitivity to peak detection errors is demonstrated by the first
hop estimate of the two hop route (fig. 5), which is off by 50%
from the 100Mbps expected from equation (23).
The PQ2 based estimates of the 1st and 2nd, and 8th and 9th segments of the international route, by their inconsistency with PQ1 as discussed above, reveal the invisible hops we know (segments 1, 2) or expect (9) are there.

Nominal        |     PQ1      |     PQ2      |     PQ3      |     PT1      |    PT2
link bandwidth | no CT  w. CT | no CT  w. CT | no CT  w. CT | no CT  w. CT | no CT  w. CT
               | 100    100   | 100    20.7  | 100    143   | 100    143   | inf    0
1. 100Mbps     | 100    100   | 99.99  15.8  | 100    120   | 100    120   | 0.4    0
2. 10Mbps      |
4. 2Mbps       | 2
5. 2Mbps       | 2      2.02  | 2      3.08  | 2      2.00  | 2      1.98  | 2      0.24
TABLE II
SIMULATION BASED LINK BANDWIDTH ESTIMATES OF A 6 HOP ROUTE. FOR EACH METHOD: COLUMN #1 NO CROSS TRAFFIC, COLUMN #2 60% CROSS TRAFFIC LOAD ON HOP #4, 30% ON THE OTHER HOPS.
B. PQ3
[Figure 11 — diagram residue: a quartet whose pacesetters are dropped after hops A and B respectively; accumulation components from the sender to A, between A and B, and from B to the receiver, with edge components at A and B.]
Fig. 11. Packet quartets with 0 < A < B.
This TTL scenario is illustrated in figure (11). Unlike the case where A = B, here even when the probe sizes are chosen equal, and the pacesetter sizes also, an accumulation term remains, and the delay variation equation (19) reduces to equation (24), where N^{A,B} denotes all the non service time terms. In the special case of B = A + 1, this becomes the defining equation of method PQ3, equation (25).
With this method in effect a single hop has been isolated through an edge effect. It is also different in that the histograms have only a single peak if we set the pacesetter spacing constant. If we assume that A is known, and the noise term has symmetric characteristics, then an estimate of the unique peak δ̂ can be easily obtained, from which the bandwidth of the isolated hop follows.
The link bandwidth of the first hop, the access link rate of the sender, is usually known, allowing the remaining bandwidths to be estimated recursively from equation (25) (if it is not known, it can be estimated by starting the measurement from A = 0; further discussion of the A = 0 case is deferred until section IV-C). Note that the estimate from one stage of the measurement is used to obtain the estimate for the next, which can potentially lead to a magnification of errors. This is not the case with the previously discussed pathchar variants, nor with PQ1 and PQ2, as there the estimates follow directly from the difference of the estimates of only two measurement stages.
PQ3's estimates for segments with invisible hops will suffer from estimation errors of a different nature to those seen so far, as can be understood from equation (24). From the difference of equations (24) and (25), the bandwidth estimate in such a case, the hop mixing equation, is seen to be similar to equation (13), and for a sufficiently small p/p_s ratio it would give the same results. For the packet sizes used in our measurements, 1500 byte pacesetters and 40 byte probes, this ratio is 0.026.
B.1 PQ3 simulation results
Returning to the 6 hop simulation comparison in table II, we
find that PQ3 performs similarly to PQ1, but with some positive
bias for large link rates. In the case of invisible hops in table III
it is again very consistent with PQ1 with no cross traffic, but has
more errors with it.
Nominal link bandwidth |     PQ1      |     PQ2      |     PQ3      |     PT1
(Segment details)      | no CT  w. CT | no CT  w. CT | no CT  w. CT | no CT  w. CT
                       | 100    100   | 9.09   15.8  | 100    143   | 100    143
1. 100Mbps             | 100    100   | 9.09   10.5  | 100    120   | 100    120
                       | 100    100   | 9.09   7.8   | 100    103   | 100    103
                       | 5      5.06  | 1.67   1.83  | 5.07   5.18  | 5.07   5.09
2. 10Mbps              | 5      5.04  | 1.67   1.74  | 5.07   5.15  | 5.07   5.04
3. 2Mbps               | 1
TABLE III
SIMULATION BASED LINK BANDWIDTH ESTIMATES OF THE SAME 6 HOP ROUTE, 2 HOPS (2ND AND 4TH) MADE INVISIBLE. FOR EACH METHOD: COLUMN #1 NO CROSS TRAFFIC, COLUMN #2 60% CROSS TRAFFIC LOAD ON HOP #3, ELSE 30%.
B.2 PQ3 measurement based comparison
Returning to figures 3 and 5, we find that, again, the estimates
are in good agreement with pathchar, clink and ACCSIG
results for link bandwidths up to approximately 30Mbps, however
the differences seem to be slightly larger than in the case of
PQ1.
The differences could be caused by the somewhat different
estimation of invisible segments, as well as by the fact that estimates
from equation (25) are affected by lower layer packet
headers in a similar way to packet pair based estimates, as discussed
in section II-D.4.
PQ3's sensitivity to peak detection errors is similar to that of
ACCSIG, which is an advantage over PQ1. Another advantage
is that due to the fixed and equal probe sizes, and pacesetter
sizes, the achievable pacesetter/probe ratio is higher than that of
PQ1 and PQ2, allowing the measurement of a wider spectrum
of hop bandwidths. However, the possible magnification of estimation
errors is an important disadvantage of this method.
C. PT1 and PT2
The special case of packet quartets with A = 0 corresponds to only one pacesetter packet being sent out from the sender, as illustrated in figure (12), in effect reducing the quartets to triplets.
[Figure 12 — diagram residue: a triplet pattern between sender and receiver; only one pacesetter is TTL limited, with accumulation components from the sender to hop B and from hop B+1 to the receiver, and an edge component at B.]
Fig. 12. Packet quartets with A = 0.
As before, packet sizes can be selected to cause the cancellation of the accumulation terms. However it is impossible to cancel both terms simultaneously, unless the probes and their pacesetters have the same size, which is incompatible with the latter's aim of causing the probes to queue, of controlling their pace. Hence, two methods, corresponding to the cancellation of only one accumulation term at a time, will be defined.
C.1 Method PT1
Setting p_{i-1} = p_i = p, a constant, for each probe cancels the second accumulation component, leaving the defining equation for method PT1.
The properties of this method are very similar to those of PQ3. The link bandwidths in this case can be estimated from the estimate δ̂ of the unique delay variation peak, where again we have set the pacesetter spacing constant. The comment regarding the potential accumulation of errors in the case of method PQ3 is also valid here, and the effect of invisible hops on PT1's bandwidth estimates, equation (27), is exactly the same.
Note that PT1 holds a similar relation to the tailgater method as ACCSIG does to pathchar and clink. As a consequence, most of the causes of error in PT1's estimates are shared with tailgater. The most important differences between the two are the result of PT1's delay variation foundation, replacing delay: there is no need for two different probing phases, nor for minimum filtering or linear regression.
C.2 Method PT2
Setting p_{s,i} = p_s, a constant, for each pacesetter results in the cancellation of the first accumulation component, leaving the defining equation for method PT2. Unfortunately, as the result of increased sensitivity to the peak detection errors and the propagation of these errors, this method is useless in practice, as we now show. For completeness we include here the hop mixing equation, which is similar to, but different from, equation (23).
C.3 PT1, PT2 simulation results
We return a final time to the 6 hop simulation comparison in table II. The striking result is that even with no cross traffic, PT2's sensitivity and error propagation cause its estimates to deteriorate rapidly. The initial peak detection error in this case has its origin in the limited resolution of the simulation timestamps, which was chosen to match the resolution of the timestamp representation of our measurement infrastructure. This tiny error becomes exponentially amplified. In contrast, the results of PT1 are generally very good, and very consistent with PQ3 as expected. The same is true for the invisible hop results of table III for PT1.
C.4 PT1 measurement results
We only present measurement results in the case of PT1, as PT2 performs so poorly. Again, from figures 3 and 5 we find that the estimates are very similar to those of PQ3. The comments regarding the effect of the link layer headers, the sensitivity to peak detection errors, and the propagation of errors, also hold.
Note that PT1, in conjunction with other methods, has the potential to detect invisible hops. Let us assume that the 1/μ_h totals of the ttl segments are known, for example from PQ1. In this case, x^B remains the only unknown of equation (28), which corresponds to the service time of the probes on the last hop of the route segment containing an invisible hop or hops, i.e., the service time of an invisible hop. However, the sensitivity to peak detection errors becomes similar to that of PQ2, although the propagation of the errors won't affect the estimate for hop B. However, the fact that PT1's estimate can potentially suffer from the link layer header error means that PQ2 seems a more efficient tool for detecting the invisible hop anatomy of segments.
V. INVESTIGATING THE BOTTLENECK
Based on the properties discussed above of PQ1 and PQ2 in estimating segments with invisible hops (see equations (13) and (23)) and the results for segments 8 to 10 of the international route (PQ1: [10.1, 0.98, 22.2] Mbps, and PQ2: [2.13, 1.9, 27.4] Mbps), we concluded that the ninth segment consists of two 2Mbps hops. This conclusion is supported by independent measurements made using the fundamentally different packet-pair based methods [11].
VI. CONCLUSIONS
A variety of link bandwidth measurement methods were presented
that exploit the ability of the IP-TTL field to target hops
within a route to enable measurements over multiple links. The
approach is novel in its use of delay variation, and histogram
based peak detection using medians, rather than delay series and
filtering on minima. The information of each probe packet is
used, resulting in greater efficiency, that is, using fewer probes for fixed accuracy. The first new method presented, ACCSIG, uses
returning ICMP messages like pathchar and clink, and
gives similar results, but is more efficient. ICMP based methods
however suffer from greater noise due to a longer round-trip
path, and router dependent processing delays. To avoid these,
packet quartets were introduced: a 6 parameter probe stream
class incorporating pairs consisting of probes trapped behind
larger pacesetter packets with limited TTL values. From this
class, 5 new measurement methods were defined: PQ1, PQ2,
PQ3, PT1 and PT2, their main characteristics analysed, and their
performance illustrated and evaluated with simulation and accurate
network measurements over two routes. One of these, PT1,
is closely related in terms of TTL use to the existing tail-
gater method, but with the advantages of being delay variation
based.
The properties of the new methods are summarized in table IV with respect to several metrics of practical importance, which were described in detail above. Rows 1-4 give rankings, where 1 is best.
We analysed in detail the consequences of invisible hops: the
fact that TTL detects IP hops or segments, which often consist of
multiple link layer hops. The methods were shown to fall into 2
fundamental categories depending on their behaviour over seg-
ments. The first group includes Pathchar (and its variants),
ACCSIG, PQ1, PQ3 and PT1, while PQ2 and PT2 belong to the
second group. Invisible hops can be detected by comparing the
results from a member of each group. The last invisible hop in
the segment can be accessed by cancelling terms between the
methods, for example by combining results from PQ1 and PQ2.
Each group can be divided further into two sub-categories, defined by their hop mixing equations, summarised in table IV.
In summary, method PQ1 is the best overall, offering accurate
estimates with few probes and low noise. ACCSIG is also
efficient and accurate. PQ2 can also be useful but is less ac-
curate, whereas PQ3 is considerably less accurate. PT1 is very
similar to PQ3 but even less accurate. Finally PT2 is very sensitive
and magnifies errors, and is useless in most situations.
Performance Metric                           ACCSIG  PQ1  PQ2  PQ3  PT1  PT2
Sensitivity to peak detection errors (1-4)   2  3
Sensitivity to lower layer headers (1-3)
Error propagation
Sensitivity to forwarding errors (1-2)
Mixing equation
TABLE IV
COMPARISON OF NEW METHODS.
Further work is needed to fully characterise and compare the
methods' performance under a variety of traffic conditions and
parameter settings. In addition, methods to automatically detect
invisible hops and to fully resolve them are the subject of
ongoing work.
--R
"Measuring link bandwidths using a deterministic model of packet delay,"
"What do packet dispersion techniques measure?,"
"Using pathchar to estimate internet link characteristics,"
"Congestion avoidance and control,"
"Characterizing end-to-end packet delay and loss in the inter- net,"
"Measuring bottleneck link speed in packet-switched networks,"
"End-to-end internet packet dynamics,"
"Measuring bandwidth,"
"Multifractal cross-traffic estimation,"
"End-to-End Available Band- Measurement Methodology, Dynamics, and Relation with TCP Throughput,"
"The packet size dependence of packet-pair like methods,"
"On the scope of end-to-end probing methods,"
"A precision infrastructure for active probing,"
"PC based precision timing without GPS,"
"patchar - a tool to infer characteristics of internet paths,"
"Nettimer: A tool for measuring bottleneck link bandwidth,"
--TR
Congestion avoidance and control
Measuring bottleneck link speed in packet-switched networks
End-to-end Internet packet dynamics
Using pathchar to estimate Internet link characteristics
Measuring link bandwidths using a deterministic model of packet delay
PC based precision timing without GPS
End-to-end available bandwidth
--CTR
A. Novak , P. Taylor , D. Veitch, The distribution of the number of arrivals in a subinterval of a busy period of a single server queue, Queueing Systems: Theory and Applications, v.53 n.3, p.105-114, July 2006
Guojun Jin , Brian L. Tierney, System capability effects on algorithms for network bandwidth measurement, Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement, October 27-29, 2003, Miami Beach, FL, USA
Matthew Luckie , Tony McGregor, Path diagnosis with IPMP, Proceedings of the ACM SIGCOMM workshop on Network troubleshooting: research, theory and operations practice meet malfunctioning reality, September 03-03, 2004, Portland, Oregon, USA
Sridhar Machiraju , Darryl Veitch , Francois Baccelli , Jean Bolot, Adding definition to active probing, ACM SIGCOMM Computer Communication Review, v.37 n.2, April 2007
Constantinos Dovrolis , Parameswaran Ramanathan , David Moore, Packet-dispersion techniques and a capacity-estimation methodology, IEEE/ACM Transactions on Networking (TON), v.12 n.6, p.963-977, December 2004
Sridhar Machiraju , Darryl Veitch, A measurement-friendly network (MFN) architecture, Proceedings of the 2006 SIGCOMM workshop on Internet network management, p.53-58, September 11-15, 2006, Pisa, Italy
Neil Spring , David Wetherall , Thomas Anderson, Reverse engineering the Internet, ACM SIGCOMM Computer Communication Review, v.34 n.1, January 2004
Mahajan , Neil Spring , David Wetherall , Thomas Anderson, User-level internet path diagnosis, Proceedings of the nineteenth ACM symposium on Operating systems principles, October 19-22, 2003, Bolton Landing, NY, USA | active probing;cross-traffic;delay variation;bottleneck bandwidth;TTL;internet measurement |
637321 | The monadic theory of morphic infinite words and generalizations. | We present new examples of infinite words which have a decidable monadic theory. Formally, we consider structures {N, >, P} which expand the ordering {N, >} of the natural numbers by a unary predicate P; the corresponding infinite word is the characteristic 0-1-sequence xp of P. We show that for a morphic predicate P the associated monadic second-order theory MTh{N, >, P) is decidable, thus extending results of Elgot and Rabin (1966) and Maes (1999). The solution is obtained in the framework of semigroup theory, which is then connected to the known automata theoretic approach of Elgot and Rabin. Finally, a large class of predicates P is exhibited such that the monadic theory MTh(N, >, P) is decidable, which unifies and extends the previously known examples. | Introduction
In this paper we study the following decision problem about a fixed ω-word x:
Given a Buchi automaton A, does A accept x?
If the problem (Acc_x) is decidable, this means intuitively that one can use x as external oracle information in nonterminating finite-state systems and still keep decidability results on their behaviour. We solve this problem for a large class of ω-words, the so-called morphic words and some generalizations, complementing and extending results of Elgot and Rabin [11] and Maes [15].
The problem (Acc_x) is motivated by a logical decision problem regarding monadic theories, starting from the fundamental work of Buchi [8] on the equivalence between the monadic second-order theory MTh(N, <) of the linear order (N, <) and ω-automata (more precisely: Buchi automata). Buchi used this reduction of formulas to automata to show that MTh(N, <) is decidable. The decidability proof is based on the fact that a sentence φ of the monadic second-order language of (N, <) can be converted into an input-free Buchi automaton A such that φ holds in (N, <) iff A admits some successful run; the latter is easily checked.
It was soon observed that Buchi's Theorem is applicable also in a more general situation, regarding expansions (N, <, P) of the structure (N, <) by a fixed predicate P ⊆ N. Here one starts with a formula φ(X) with one free set variable and considers an equivalent Buchi automaton A over the input alphabet B = {0, 1}: φ(X) is true in (N, <) with P as interpretation of X iff A accepts the characteristic word x_P over B associated with P (the i-th letter of x_P is 1 iff i ∈ P). In other words: The theory MTh(N, <, P) is decidable if one can determine, for any given Buchi automaton A, whether A accepts x_P.
Elgot and Rabin [11] followed this approach via the decision problem (Acc_{x_P}) and found several interesting predicates P such that MTh(N, <, P) is decidable, among them the set of factorial numbers n!, the set of k-th powers n^k for any fixed k, and the set of k-powers k^n for any fixed k.
Elgot and Rabin (and later also Siefkes [18]) used an automata theoretic analysis, which we call the contraction method, for the solution of the decision problem (Acc_{x_P}). The idea is to reduce the decision problem to the case of ultimately periodic ω-words (which is easily solvable). Given P as one of the predicates mentioned above, Elgot and Rabin showed that for any Buchi automaton A, the 0-sections in x_P can be contracted in such a way that an ultimately periodic ω-word is obtained which is accepted by A iff x_P is accepted by A. For the ultimately periodic word, one can easily decide whether it is accepted by A.
More recently, A. Maes [15] studied "morphic predicates", which are obtained by iterative application of a morphism on words (see Section 2 for definitions). The morphic predicates include examples (for instance, the predicate of the Fibonacci numbers) which do not seem to be accessible by the Elgot-Rabin method. Maes proved that for any morphic predicate P, the first-order theory FTh(N, <, P) is decidable, and he also introduced appropriate (although special) versions of morphic predicates of higher arity. It remained open whether for each morphic predicate P, the monadic theory MTh(N, <, P) is decidable.
In the present paper we answer this question positively, based on a new (and quite simple) semigroup approach to the decision problem (Acc_{x_P}). In Section 2 we show that for morphic predicates P the problem (Acc_{x_P}) is decidable. As a consequence, we find new examples of predicates P with decidable monadic theory MTh(N, <, P). Prominent ones are the Fibonacci predicate (consisting of all Fibonacci numbers) and the Thue-Morse word predicate (consisting of those numbers whose binary expansion has an even number of 1's).
In the second part of the paper, we embed this approach into the framework of the contraction method. This method is shown to be applicable to predicates which we call "residually ultimately periodic". We prove two results: Each morphic predicate is residually ultimately periodic, and a certain class of residually ultimately periodic predicates shares strong closure properties, among them closure under sum, product, and exponentiation. This allows us to obtain many example predicates P for which the monadic theory MTh(N, <, P) is decidable.
It should be noted that for certain concrete applications (such as for the Fibonacci predicate) the semigroup approach is much more convenient than an explicit application of the contraction method, and only an analysis in retrospect reveals the latter possibility. Also morphic predicates like the Thue-Morse word predicate are not approachable directly by the contraction method, because there are no long sections of 0's or 1's.
Let us finally comment on example predicates P where the corresponding Buchi acceptance problem (Acc_{x_P}) and hence MTh(N, <, P) is undecidable, and give some comments on unsettled cases and on related work.
First we recall a simple recursive predicate P such that MTh(N, <, P) is undecidable. For this, consider a non-recursive, recursively enumerable set Q of positive numbers, say with recursive enumeration m_0, m_1, m_2, ..., and let P consist of 0 and the partial sums m_0 + m_1 + ... + m_i for i ≥ 0. Then P is recursive, but even the first-order theory FTh(N, <, P) is undecidable: We have k ∈ Q iff for some element x of P, the element x + k is the next element after x in P, a
condition which is expressible by a rst-order sentence k over hN; <; P i. Buchi
and Landweber [9] and Thomas [19] determined the recursion theoretic complexity
of theories MThhN; <; P i for recursive P ; it turns out, for example, that
for recursive P the theory MThhN; <; P i is truth-table-reducible to a complete
-set, and that this bound cannot be improved. The situation changes when
recursive predicates over countable ordinals are considered (see [22]). In [20] it
was shown that for each predicate P , the full monadic theory MThhN; <; P i is
decidable i the weak monadic theory WMThhN; <; P i is (where all set quantiers
are assumed to range only over nite sets). However, there are examples P
such that the rst-order theory FThhN; <; P i is decidable but WMThhN; <; P i
is undecidable ([19]).
In the present paper we restrict ourselves to expansions of hN; <i by unary
predicates. For expansions hN; <; fi by unary functions f , the undecidability of
MThhN; <; fi arises already in very simple cases, like for the doubling function
2n. For a survey on these theories see [16].
On the other hand, it seems hard to provide mathematically natural recursive
predicates P for which MThhN; <; P i is undecidable. A prominent example of
a natural predicate P where the decidability of MThhN; <; P i is unsettled is
the prime number predicate. As remarked already by Buchi and Landweber in
[9], a decidability proof should be dicult because it would in principle answer
unsolved number theoretic problems like the twin prime hypothesis (which states that infinitely many pairs of primes with difference 2 exist). The only known result is
that MThhN; <; P i is decidable under the linear case of Schinzel's hypothesis
[4].
Part of the results of the present paper has been presented at the conference
MFCS'2000 [10].
2 Buchi automata over morphic predicates
A morphism from A* to itself is an application σ such that the image of any word u = a_1 a_2 ... a_n is the concatenation σ(a_1)σ(a_2)...σ(a_n) of the images of its letters. A morphism is then completely defined by the images of the letters. In the sequel, we describe morphisms by just specifying the respective images of the letters as in the following example.
Let A be a finite alphabet and let σ be a morphism from A* to itself. For any integer n, we denote by σ^n the composition of n copies of σ. Let (x_n)_{n≥0} be the sequence of finite words defined by x_n = σ^n(a) for any integer n. If the first letter of σ(a) is a, then each word x_n is a prefix of x_{n+1}. If furthermore the sequence of lengths |x_n| is not bounded, the sequence converges to an infinite word x which is denoted by σ^ω(a). The word x is a fixed point of the morphism σ since it satisfies σ(x) = x.
Example 1 Consider the morphism σ given by σ(a) = abcc, σ(b) = bcc and σ(c) = c. The words x_0, x_1, x_2 and x_3 are respectively equal to a, abcc, abccbcccc and abccbccccbcccccc. It can be easily proved by induction on n that σ^{n+1}(a) = σ^n(a) b c^{2n+2}. Therefore, the fixed point σ^ω(a) is equal to the infinite word a b c^2 b c^4 b c^6 b c^8 ...
An infinite word x over B is said to be morphic if there is a morphism σ from A* to itself and a morphism φ from A* to B* such that x = φ(σ^ω(a)). In the sequel, the alphabet B is often the alphabet {0, 1} and the morphism φ is letter to letter.
The characteristic word of a predicate P over the set N of non-negative integers is the infinite word x_P = b_0 b_1 b_2 ... over the alphabet B = {0, 1} defined by b_n = 1 iff n ∈ P. A predicate is said to be morphic iff its characteristic word is morphic.
These definitions are illustrated by the following two examples.
Example 2 Consider the morphism σ introduced in the preceding example and the morphism ψ given by ψ(a) = ψ(b) = 1 and ψ(c) = 0. The morphic word ψ(σ^ω(a)) = 1100100001... is actually the characteristic word of the predicate {n^2 | n ∈ N}. This can be easily proved using the equality (n+1)^2 = n^2 + 2n + 1. The square predicate is therefore morphic.
It will be proved in the sequel that the class of morphic predicates contains all predicates of the form {n^k | n ∈ N} and {k^n | n ∈ N} for any fixed integer k.
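A short sketch that iterates the morphisms of Examples 1 and 2 and checks that the 1's of the resulting word sit exactly at the squares. The concrete images of σ and ψ are the reconstructed ones given above; the helper names are illustrative.

```python
def iterate(morphism, seed, steps):
    """Apply a letter-to-word morphism `steps` times to `seed`."""
    word = seed
    for _ in range(steps):
        word = "".join(morphism[ch] for ch in word)
    return word

sigma = {"a": "abcc", "b": "bcc", "c": "c"}
psi = {"a": "1", "b": "1", "c": "0"}

prefix = "".join(psi[ch] for ch in iterate(sigma, "a", 8))
squares = {n * n for n in range(len(prefix))}
assert all((prefix[i] == "1") == (i in squares) for i in range(len(prefix)))
print(prefix[:20])   # 11001000010000001000
```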
Example 3 Consider the morphism from B* to B* defined by 1 ↦ 10 and 0 ↦ 01. Its fixed point starting from the letter 1, namely 1001011001101001..., is the characteristic word of the predicate P with n ∈ P iff the binary expansion of n contains an even number of 1's. This fixed point is the well-known Thue-Morse word. We refer the reader to [5] for an interesting survey on that sequence and related works of A. Thue.
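A two-line check of Example 3, iterating the morphism given above and comparing against the parity of 1's in the binary expansion.

```python
tm = {"1": "10", "0": "01"}
word = "1"
for _ in range(10):
    word = "".join(tm[ch] for ch in word)   # 2**10 letters of the fixed point
assert all((word[n] == "1") == (bin(n).count("1") % 2 == 0) for n in range(len(word)))
print(word[:16])   # 1001011001101001
```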
Recall that a Buchi automaton A is given by a tuple (Q, A, E, I, F) where Q is a finite set of states, E ⊆ Q × A × Q is the set of transitions and I and F are the sets of initial and final states. A path is successful if it starts in an initial state and goes infinitely often through a final state. An infinite word is accepted if it is the label of a successful path. We refer the reader to [21] for a complete introduction. Now we can state the main result and, in the corollary, its formulation in the context of monadic theories:
Theorem 4 Let x = φ(σ^ω(a)) be a fixed morphic word where σ : A* → A* and φ : A* → B* are morphisms. For any Buchi automaton A, it can be decided whether x is accepted by A.
decidability result, invoking Buchi's Theorem [8] on the equivalence between
monadic formulas over hN; <i and Buchi automata:
Corollary 5 For any unary morphic predicate P , the monadic second-order
theory of hN; <; P i is decidable.
Proof (of Theorem 4) Let A = (Q, A, E, I, F) be a Buchi automaton. Define the equivalence relation ≈ over A* by: u ≈ v iff for all states p and q, p -u-> q exactly when p -v-> q, and p -u->_F q exactly when p -v->_F q. Here p -u-> q means that there is a path from p to q labeled by u, and p -u->_F q means that there is a path from p to q labeled by u which hits some final state. This equivalence relation captures that two finite words have the same behavior in the automaton with respect to the Buchi acceptance condition. It was already introduced by Buchi in [8].
Denote by π the projection from A* to A*/≈ which maps each word to its equivalence class.
The equivalence relation ≈ is a congruence of finite index. Indeed, for any words u, v, u' and v', the following implication holds: if u ≈ u' and v ≈ v', then uv ≈ u'v'. This property allows us to define a product on the classes which endows the set A*/≈ with a structure of finite semigroup. The projection π is then a morphism from A* onto A*/≈.
Furthermore, for any fixed states p and q, there are at most three possibilities for any word u: either there is a path from p to q through a final state, or there is a path from p to q but not through a final state, or there is no path from p to q labeled by u. This proves that the number of classes is bounded by 3^{n^2}, where n is the number of states of the automaton.
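To make the congruence concrete, here is a sketch of how the class of a word can be represented and composed: for every pair of states, the class records whether the word admits no path, a path avoiding final states, or a path through a final state, which directly gives the 3^{n^2} bound. The automaton encoding and the names (≈, π, etc. above included) are illustrative choices, not necessarily the paper's notation.

```python
from itertools import product

NONE, PATH, PATH_F = 0, 1, 2   # no path / path / path hitting a final state

def letter_class(a, states, trans, final):
    """Class of the one-letter word a, as a table over state pairs."""
    cls = {}
    for p, q in product(states, repeat=2):
        if (p, a, q) in trans:
            cls[p, q] = PATH_F if (p in final or q in final) else PATH
        else:
            cls[p, q] = NONE
    return cls

def compose(c1, c2, states):
    """Class of the concatenation uv from the classes of u and v."""
    out = {}
    for p, q in product(states, repeat=2):
        best = NONE
        for r in states:
            if c1[p, r] and c2[r, q]:                 # both legs exist
                best = max(best, c1[p, r], c2[r, q])  # keep PATH_F if either leg has it
        out[p, q] = best
    return out

def word_class(word, states, trans, final):
    cls = None
    for a in word:
        lc = letter_class(a, states, trans, final)
        cls = lc if cls is None else compose(cls, lc, states)
    return cls
```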
The following observation gives the main property of the congruence ≈. Suppose that the two infinite words x and x' can be factorized x = u_0 u_1 u_2 ... and x' = u'_0 u'_1 u'_2 ... with u_k ≈ u'_k for any k ≥ 0. Then x is accepted by A iff x' is accepted by A. Indeed, suppose that x is the label of a successful path. This path can be factorized into finite paths q_0 -u_0-> q_1 -u_1-> q_2 -u_2-> ..., where q_0 is initial and where the finite path from q_k to q_{k+1} hits a final state for infinitely many k. Since u_k ≈ u'_k for any k ≥ 0, there is another path q_0 -u'_0-> q_1 -u'_1-> q_2 ..., where the finite path from q_k to q_{k+1} hits a final state whenever the corresponding path labeled u_k does. This proves that x' is also the label of a successful path and that it is accepted by A. In particular, if an infinite word x can be factorized x = u_0 u_1 u_2 ... such that u_1 ≈ u_2 ≈ u_3 ≈ ..., then x is accepted by A iff u_0 u_1^ω is also accepted by A.
Since a is the first letter of σ(a), the word σ(a) is equal to au for some nonempty finite word u. It may be easily verified by induction on n that for any integer n, one has σ^{n+1}(a) = a u σ(u) σ^2(u) ... σ^n(u). The word x = φ(σ^ω(a)) can therefore be factorized x = φ(a) φ(u) φ(σ(u)) φ(σ^2(u)) ...; write u_k = φ(σ^k(u)) for k ≥ 0.
We claim that there are two positive integers n and p such that for any k ≥ n, the relation u_k ≈ u_{k+p} holds. This relation is equivalent to π(φ(σ^k(u))) = π(φ(σ^{k+p}(u))). A morphism from A* into a semigroup is completely determined by the images of the letters. Therefore, there are finitely many morphisms from A* into the finite semigroup A*/≈. This implies that there are two positive integers n and p such that the morphisms π∘φ∘σ^n and π∘φ∘σ^{n+p} coincide. This implies that π(φ(σ^k(w))) = π(φ(σ^{k+p}(w))) for every word w and every k ≥ n, and thus u_k ≈ u_{k+p}. Note that these two integers n and p can be effectively computed. It suffices to check that π(φ(σ^n(b))) = π(φ(σ^{n+p}(b))) for any letter b of the alphabet A.
Define the sequence (v_k)_{k≥0} of finite words by v_0 = φ(a) u_0 u_1 ... u_{n-1} and v_k = u_{n+(k-1)p} u_{n+(k-1)p+1} ... u_{n+kp-1} for k ≥ 1. The word x can be factorized x = v_0 v_1 v_2 v_3 ... and the relations v_1 ≈ v_2 ≈ v_3 ≈ ... hold. This proves that the word x is accepted by the automaton A iff the word v_0 v_1^ω is accepted by A. This can obviously be decided.
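Putting the pieces together, here is a schematic rendering of the effectiveness argument (not the authors' code): compute, for every letter b, the class of φ(σ^k(b)) for k = 1, 2, ... until the letter-to-class assignment repeats, which yields n and p; acceptance of x then reduces to acceptance of the ultimately periodic word v_0 v_1^ω. The class machinery (cls_of_word) can be a routine like word_class from the previous sketch.

```python
def find_n_p(alphabet, sigma, phi, cls_of_word):
    """Smallest n, p with class(phi(sigma^n(b))) = class(phi(sigma^{n+p}(b)))
    for every letter b; cls_of_word maps a finite word to its class (a dict)."""
    freeze = lambda cls: tuple(sorted(cls.items()))

    def phi_sigma_k(b, k):
        w = b
        for _ in range(k):
            w = "".join(sigma[c] for c in w)
        return "".join(phi[c] for c in w)

    seen, k = {}, 1
    while True:   # terminates: only finitely many letter-to-class assignments
        snap = tuple(freeze(cls_of_word(phi_sigma_k(b, k))) for b in sorted(alphabet))
        if snap in seen:
            n, p = seen[snap], k - seen[snap]
            return n, p   # then: x accepted  iff  v_0 v_1^omega accepted
        seen[snap] = k
        k += 1
```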
3 A large class of morphic predicates
The purpose of this section is to give a uniform representation of a large class of morphic predicates: We will show that each predicate {Q(n) k^n | n ≥ 0}, for k ≥ 0 and a polynomial Q(n) with non-negative integer values, is morphic. This supplies a large class of predicates P where the decision problem (Acc_{x_P}) and hence the monadic theory of (N, <, P) is decidable. In particular, the classical examples {k^n | n ∈ N} and {n^k | n ∈ N} for fixed k in N of [11] are covered by this.
As a preparation, we shall develop a sufficient condition on sequences (u_n)_{n≥0} to define a morphic predicate. It will involve the notion of an N-rational sequence. These considerations will also show that the Fibonacci predicate {F_n | n ∈ N} (where F_0 = F_1 = 1 and F_{n+2} = F_{n+1} + F_n) is morphic. We refer the reader to [17] for a complete introduction and to [3] for a survey, although we recall here the definitions.
Definition 6 A sequence (u_n)_{n≥0} of integers is N-rational if there is a graph G allowing multiple edges, and sets I and F of vertices, such that u_n is the number of paths of length n from a vertex of I to a vertex of F. The graph G is said to recognize the sequence (u_n)_{n≥0}.
An equivalent definition is obtained by considering non-negative matrices. A sequence (u_n)_{n≥0} is N-rational iff there is a matrix M in N^{k×k} and two vectors L in B^{1×k} and C in B^{k×1} such that u_n = L M^n C. It suffices indeed to consider the adjacency matrix M of the graph and the two characteristic vectors of the sets I and F of vertices. It is also possible to assume that the two vectors L and C respectively belong to N^{1×k} and N^{k×1} instead of B^{1×k} and B^{k×1}, since the class of N-rational sequences is obviously closed under addition. A triplet (L, M, C) such that u_n = L M^n C is called a matrix representation of the sequence (u_n)_{n≥0}, and the integer k is called the dimension of the representation. The following example illustrates these notions.
Example 7 The number of successful paths of length n in the graph pictured in Figure 1 is the Fibonacci number F_n, where F_0 = F_1 = 1 and F_{n+2} = F_{n+1} + F_n. This shows that the sequence (F_n)_{n≥0} is N-rational. This sequence has a matrix representation of dimension 2 which can be deduced from the graph of Figure 1.
Figure
1: A graph for the Fibonacci sequence
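One dimension-2 representation consistent with the description above is sketched below; the concrete L, M, C are an assumption, since several equivalent choices exist and the original matrices are not reproduced here.

```python
import numpy as np

L = np.array([[1, 0]])            # start in the first vertex
M = np.array([[1, 1],             # adjacency matrix of a 2-vertex graph with a
              [1, 0]])            # self-loop on vertex 1 and edges 1->2 and 2->1
C = np.array([[1], [0]])          # accept paths ending in the first vertex

def u(n):
    return int((L @ np.linalg.matrix_power(M, n) @ C).item())

print([u(n) for n in range(8)])   # [1, 1, 2, 3, 5, 8, 13, 21] = F_0..F_7
```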
We now state the main result of this section.
Theorem 8 Let (u_n)_{n≥0} be a sequence of non-negative integers and let d_n = u_{n+1} - u_n be the sequence of differences. If there is some integer l such that d_n ≥ 1 for any n ≥ l and such that the sequence (d_n)_{n≥l} is N-rational, then the predicate {u_n | n ∈ N} is morphic.
As a first illustration of the Theorem, reconsider the predicate of the Fibonacci numbers. Each difference d_n = F_{n+1} - F_n is equal to F_{n-1}; it obviously satisfies d_n ≥ 1 for n ≥ 1, and the sequence (F_n)_{n≥0} is N-rational as shown in Example 7. So the Fibonacci predicate is morphic.
With the following corollary we obtain a large collection of morphic predicates, including the predicates of the form {n^k | n ∈ N} considered in [11].
Corollary 9 Let Q be a polynomial such that Q(n) is integer for any integer n and let k be a positive integer. The predicate {Q(n) k^n | n ≥ 0} is morphic.
The proof of the corollary is entirely based on the following lemma.
Lemma 10 Let Q be a polynomial with a positive leading coefficient such that Q(n) is integer for any integer n. Let k be a positive integer and let u_n be defined by u_n = Q(n) k^n. There is a non-negative integer l such that the sequence (u_{n+l})_{n≥0} is N-rational.
Note that such a polynomial may have non-integer coefficients, as, for instance, the polynomial n(n+1)/2 shows.
Proof Let d be the degree of Q. We claim that there are then a non-negative
integer l and positive integers a a d such that for any n,
d
a i
Let Q l (n) be the polynomial Q(n + l). Since Q l (n) is integer for any integer n,
the polynomial Q l (n) is equal to a linear combination of the binomials with
integer coecients (see [12, p. 189]). There is a unique sequence a
of d integers such that for any integer n
d
a l;i
Since the binomials satisfy the well-known relation n+1
, it
follows that the coecients a l;i satisfy the following relation. For any integer l,
one has a l+1;d = a l;d and a a l;i+1 for i < d. Since the leading
coecient of Q l is positive, the coecient a l;d is also positive for any l. Using
the relation on the coecients on the a l;i , it can be proved by induction on the
dierence d i that for any i d, there is an integer l i such that a l;i is positive
for any l l i . The integer l dened the required
property.
We now dene the matrix M and the vectors L and C as follows. The
vector L is the vector of dimension d
d. The matrix M is the square matrix of dimension d dened by
d. The
vector C is the the vector of dimension d
It is pure routine to prove by induction on n that LM n is the vector Ln given
by L
for 0 i d. Therefore un+l is equal to LM n C for any
and the sequence (un+l ) n0 is N-rational.
The proof of Theorem 8 needs some preliminary result that we now state.
This lemma makes easier the proof that certain predicates are morphic. It
essentially states that the property of being morphic is preserved by shifting
and by changing a finite number of values. In particular, if two predicates P
and P' coincide for almost every n, then P is morphic iff P' is morphic.
Lemma 11 Let P be a predicate and let R be a finite set of integers. Let k be
a non-negative integer and let P' be a predicate that differs from {n + k | n ∈ P}
only on the finite set R. Then P is morphic iff P' is morphic.
Proof Let P be a morphic predicate. Assume that the characteristic word xP
of P is equal to ( ! (a)) where and are respectively morphisms from A
into itself and from A into B . Let x be the xed point ! (a) and let u be the
nite word such that (a) = au. The innite word x can be then factorized
We rst prove that both predicates P
are also morphic. Let A 0 be the alphabet A [ where a 0 and a 1 are two
new symbols. Dene the morphism 0 from A 0 into itself by 0 (a
any b in A. It is clear that the xed point
Dene the morphism 0 from A 0 into B by 0 (a 1
any b in A. If 0 (a 0 ) is set to 0, then the word is the characteristic
word of P 0 and if 0 (a 0 ) is set to 1, then the word is the characteristic
word of P 00
We now prove that there is a positive integer k such that the predicate
be the alphabet A [
where a 0 is a new symbol. Dene the morphism 0 from A 0 into itself by
any b in A. It is clear that the xed point
Let k be the length of u. Dene the morphism 0 from A 0 into B by (a
any b in A. The word 0 is the characteristic
word of P 0 .
The claimed result follows then easily from the two previous results.
The following two results will be used in the proof of Theorem 8. The first
result is due to Schützenberger. We refer the reader to [17, Thm II.8.6] and [6,
Thm V.2.1] for complete proofs.
Theorem 12 (Schützenberger 1970) Let (u_n)_{n≥0} be an N-rational sequence
such that u_n ≥ 1 for any n. The sequence (u_n − 1)_{n≥0} is also N-rational.
For the next result, we need the notion of a D0L-sequence. An N-rational
sequence (u_n)_{n≥0} of integers is said to be a D0L-sequence if it can be recognized
by a graph all of whose states are final. Equivalently, an N-rational sequence
(u_n)_{n≥0} is a D0L-sequence if it has a matrix representation (L, M, C) such that
all components of L are either 0 or 1 and such that all components of C are
equal to 1. We have then the following result [17, Lem III.7.4].
Lemma 13 Every N-rational sequence (u_n)_{n≥0} can be decomposed into D0L-
sequences. This means that there are two integers K and p such that each
sequence (u_{k+np})_{n≥0} for k ≥ K is a D0L-sequence.
We finally come to the proof of Theorem 8.
Proof We suppose that u_n satisfies d_n ≥ 1 for n ≥ l and that the
sequence (d_n)_{n≥l} is N-rational. By Lemma 11, it may be assumed without
loss of generality that l = 0. By Theorem 12, the sequence (v_n)_{n≥0} defined by
v_n = d_n − 1 is also N-rational. By Lemma 13, that sequence can be decomposed into
D0L-sequences. There are two integers K and p such that each sequence
(v_{k+np})_{n≥0} for k ≥ K is a D0L-sequence. By Lemma 11, it may be assumed
again without loss of generality that K = 0 and that u_0 = 0.
For 0 ≤ k < p, denote by v^k the sequence (v_{k+np})_{n≥0}. Each sequence v^k has a matrix
representation (L_k, M_k, C_k) such that each component of L_k is either 0 or 1
and such that all components of C_k are equal to 1. Let d_k be the dimension of
that representation. Define the alphabet A by
Dene the morphism from A into itself by
a 7! ac L0;1
c k;i 7! c Mk;i;1
It can be easily proved by induction on n that
Y
where each w k;i is a word on the alphabet fc k;j j 1 j d k g of length v k;i .
Actually the number occurrences of the letter c k;j in the word w k;i is the j-th
component of the vector L k M i
k . Dene then the morphism from A into B
by
As a complement to Corollary 9 (that predicates {Q(n)k^n | n ∈ N} are
morphic), we formulate a necessary condition for a predicate P to be morphic.
It turns out that the factorial predicate {n! | n ∈ N} does not fall under this
condition, which shows that it is not morphic. In the subsequent section, we
shall develop a framework where such predicates can be handled together with
the morphic ones, thus unifying the contraction method of Elgot and Rabin
with the results above.
Proposition 14 Let (u_n)_{n≥0} be a strictly increasing sequence of integers. If
the predicate {u_n | n ∈ N} is morphic, then u_{n+1} ≤ k u_n for almost all n, for some
integer k.
Proof Suppose that the characteristic word of the predicate {u_n | n ∈ N} is
equal to τ(σ^ω(a)), where σ and τ are respectively morphisms from A* into itself
and from A* into B*. Let x be the fixed point σ^ω(a) and let u be the finite word
such that σ(a) = au. The infinite word x can then be factorized as x = τ(a) τ(u) τ(σ(u)) τ(σ^2(u)) ⋯.
Let k be the integer defined by k = max_{b ∈ A} |σ(b)|. It can be easily shown by
induction on n that |σ^n(b)| ≤ k^n for any letter b. Let B be the set of letters b
such that τ(b) contains at least one 1, that is B = {b ∈ A | τ(b) contains 1}. We claim
that if for a fixed letter b, there is an integer n such that σ^n(b) contains a letter
of B, the smallest integer satisfying this property is smaller than the cardinality
of A. This follows from the fact that a letter c appears in σ^n(b) iff there is a
sequence of letters b = b_0, b_1, ..., b_n = c such that b_{i+1} appears in σ(b_i)
for 0 ≤ i < n. For any integer n, the word σ^n(a)
contains therefore a letter of B and the result follows easily.
4 Residually ultimately periodic predicates
In this section, we develop a framework which merges the contraction method
of Elgot and Rabin with the semigroup approach used for morphic predicates.
The common generalization of the two approaches is captured by the notion of
residually ultimately periodic infinite words. We introduce these words and we
show that morphic words belong to this class. The application to the contraction
method is developed in the subsequent section.
Definition 15 A sequence (u_n)_{n≥0} of words over an alphabet A is said to
be residually ultimately periodic if for any morphism φ from A* into a finite
semigroup S, the sequence (φ(u_n))_{n≥0} is ultimately periodic.
This property is said to be effective iff for any morphism φ from A* into a
finite semigroup, two integers n and p such that φ(u_k) = φ(u_{k+p}) for any k ≥ n
can be effectively computed. An infinite word x is called residually ultimately
periodic if it can be factorized as x = u_0 u_1 u_2 ⋯ where the sequence (u_n)_{n≥0} is
effectively residually ultimately periodic. The following proposition shows how
this property can be used for deciding the problem (Acc_x).
Proposition 16 If the infinite word x is residually ultimately periodic, the
problem (Acc_x) is decidable.
Proof Let A = (Q, E, I, F) be a Büchi automaton. Define the equivalence
relation ≈ over A* by letting u ≈ u' iff, for every pair of states p and q, u labels a
path from p to q exactly when u' does, and u labels a path from p to q visiting a
final state exactly when u' does.
The relation ≈ is a congruence of finite index. The application which maps any
finite word to its class is therefore a morphism from A* into the finite semigroup
A*/≈.
This congruence has the following main property. Suppose that the two
infinite words x and x' can be factorized as x = u_0 u_1 u_2 ⋯ and x' = u'_0 u'_1 u'_2 ⋯
such that u_k ≈ u'_k for any k ≥ 0. Then x is accepted by A iff x' is accepted
by A.
Since the sequence (u_n)_{n≥0} is residually ultimately periodic, there are two
integers n and p such that u_k ≈ u_{k+p} for any k greater than n. Therefore, x
is accepted by A iff the ultimately periodic word v_0 (v_1)^ω is accepted by A, where
v_0 = u_0 ⋯ u_n and v_1 = u_{n+1} ⋯ u_{n+p}.
We illustrate this notion by the following example.
Example 17 The sequence of words (a^{n!})_{n≥0} is residually ultimately periodic.
This sequence is actually residually ultimately constant. It is indeed well-known
that for any element s of a finite semigroup S and any integer n greater than
the cardinality of S, s^{n!} is equal to a fixed element, usually denoted s^ω in the
literature [2, p. 72].
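As a quick computational illustration (ours, not from the text) of this classical fact, one can verify it in the finite multiplicative semigroup Z/mZ for small moduli m: the sequence s^{n!} becomes constant as soon as n exceeds the cardinality of the semigroup.

```python
# Check that s^(n!) becomes constant in the finite semigroup (Z/mZ, *)
# once n exceeds the cardinality m of the semigroup (toy moduli below).
from math import factorial

def stabilizes(m, s, upto=12):
    values = [pow(s, factorial(n), m) for n in range(1, upto)]
    tail = values[m:]              # entries with n > m, i.e. beyond the semigroup size
    return len(set(tail)) == 1     # the tail is constant: s^(n!) = s^omega

print(all(stabilizes(m, s) for m in range(2, 9) for s in range(m)))   # True
```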
The sequences of words which are residually ultimately constant have been
considerably studied. They are called implicit operations in the literature [2]. A
slight variant of the previous example shows that the sequence (u_n)_{n≥0} defined
by u_n = 0^{n·n!−1} 1 is residually ultimately constant. Since (n+1)! − n! − 1 is
equal to n·n! − 1, the word u_0 u_1 u_2 ⋯ is the characteristic word of the factorial
predicate {n! | n ∈ N}. The monadic theory of ⟨N, <, P⟩ where P is the
factorial predicate is therefore decidable by the previous proposition.
In the following propositions we connect the residually ultimately periodic
sequences to the infinite words obtained by iterating morphisms.
Proposition 18 Let σ be a morphism from A* into itself and let u be a word
over A. The sequence u_n = σ^n(u) is residually ultimately periodic, and this
property is effective.
Proof Let φ be a morphism from A* into a finite semigroup S. Since S is
finite, there are finitely many morphisms from A* into S. Therefore, there are
two integers n and p such that φ ∘ σ^n = φ ∘ σ^{n+p}. This implies that for any k
greater than n, one has φ(σ^k(u)) = φ(σ^{k+p}(u)). Note that the
two integers n and p can be effectively computed. It suffices to find n and p
such that (φ ∘ σ^n)(a) = (φ ∘ σ^{n+p})(a) for any letter a in A.
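The last step of this proof is effective in a very concrete sense: the morphisms φ∘σ^n are determined by their values on letters, and these values can be iterated directly without ever expanding σ^n(a). The following sketch (ours) finds such a pair (n, p) for a toy alphabet, morphism σ and finite semigroup (Z/6Z, *); none of these concrete choices come from the paper.

```python
# Sketch: computing n and p with (phi o sigma^n) = (phi o sigma^(n+p)).
# psi_n(a) = phi(sigma^n(a)) satisfies psi_{n+1}(a) = product of psi_n(b) over
# the letters b of sigma(a); since there are finitely many maps from the
# alphabet into S, the sequence (psi_n) is eventually periodic.
from functools import reduce

sigma = {'a': 'ab', 'b': 'a'}      # a toy morphism of {a, b}*
phi   = {'a': 2, 'b': 3}           # a toy morphism into the semigroup (Z/6Z, *)
mod   = 6

def step(psi):
    return {a: reduce(lambda x, y: (x * y) % mod,
                      (psi[b] for b in sigma[a])) for a in sigma}

seen, psi, n = {}, dict(phi), 0
while tuple(sorted(psi.items())) not in seen:
    seen[tuple(sorted(psi.items()))] = n
    psi, n = step(psi), n + 1
start = seen[tuple(sorted(psi.items()))]
print("n =", start, " p =", n - start)   # phi o sigma^n = phi o sigma^(n+p)
```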
It follows from the proposition that a morphic word has a factorization whose
factors form a residually ultimately periodic sequence. Let x be a morphic word
x = τ(σ^ω(a)), where σ and τ are morphisms, and let u be the finite word such that
σ(a) = au. The word x can be factorized as x = u_0 u_1 u_2 ⋯ with u_0 = τ(au)
and u_n = τ(σ^n(u)) for n ≥ 1. By the proposition, the sequence (σ^n(u))_{n≥0} is
residually ultimately periodic. The sequence (u_n)_{n≥0} is therefore also residually
ultimately periodic.
5 The predicate class K
The decidability of the decision problem (Acc_x) for an infinite word x involves a
"good" factorization of the word x. In the case of morphic words, this factorization
is naturally provided by the generation of x via a morphism. Another approach
is to consider the canonical factorization of the word x induced by the blocks
which form the word x (representing the distances between the successive
elements of the predicate). The contraction method as developed by Elgot and
Rabin [11] and Siefkes [18] reduces the sequence of these finite words from 0*1
to an ultimately periodic sequence. In this section we embed this method into
the framework developed above.
If (k_n)_{n≥0} is a strictly increasing sequence of integers, the characteristic
word x_P of the predicate P = {k_n | n ∈ N} can be canonically factorized into blocks u_n,
where u_n is the word 0^{k_{n+1} − k_n − 1} 1 over the alphabet B. If this
sequence of words is residually ultimately periodic and if furthermore
this property is effective, it is decidable whether the word x_P is accepted by a
Büchi automaton and the monadic theory of ⟨N, <, P⟩ is therefore decidable.
We will prove that the class of sequences such that this property holds
contains interesting sequences like (n^k)_{n≥0} and (k^n)_{n≥0}, and that it is
also closed under several natural operations like sum, product and exponentiation.
The following lemma essentially states that it suffices to consider the sequences
u_n = a^{k_{n+1} − k_n} over a one-letter alphabet.
Lemma 19 Let (u_n)_{n≥0} be a sequence of words over A and let a be a letter.
The sequence (u_n)_{n≥0} is residually ultimately periodic iff the sequence (u_n a)_{n≥0}
is residually ultimately periodic. Moreover this property is effective for (u_n)_{n≥0}
iff it is effective for (u_n a)_{n≥0}.
Proof It is clear that if the sequence (u n ) n0 is residually ultimately periodic,
then the sequence (un a) n0 is also residually ultimately periodic. Indeed, for
any morphism from A into nite semigroup, the relation
implies that if the sequence (un ) n0 is ultimately periodic, then the sequence
(una) is also ultimately periodic.
Conversely let be a morphism from A into a nite semigroup S. Let ^
S be
the semigroup S 1 S with the product dened by (s; t)(s
be the morphism from A into ^
S dened by It may be easily
veried by induction on the length on the word w that
Therefore, if the sequence (un a) n0 is residually ultimately periodic, the sequences
(una) and (un ) are ultimately periodic. The sequence (u n ) n0 is
then residually ultimately periodic
If A is the one-letter alphabet {a}, the semigroup A* is isomorphic to the
set N of integers by identifying any word a^n with the integer n. Therefore, a
sequence (k_n)_{n≥0} of integers is said to be residually ultimately periodic iff the
sequence (a^{k_n})_{n≥0} is residually ultimately periodic.
Definition 20 Let K be the class of increasing sequences (k_n)_{n≥0} of integers
such that the sequence (k_{n+1} − k_n)_{n≥0} is residually ultimately periodic.
By Lemma 19 and by Proposition 16, we conclude:
Theorem 21 Let (k_n)_{n≥0} be in K and let x_P be the characteristic word of the
predicate P = {k_n | n ∈ N}. Then the decision problem (Acc_{x_P}) is decidable
and the monadic theory MTh⟨N, <, P⟩ is also decidable.
We give two comments, the rst one on the relation between the class K and
residually ultimately periodicic sequences, the second one on a similar class of
predicates introduced by Siefkes [18].
As stated in Corollary 28 below, each sequence in K is residually ultimately
periodic. The converse is not true as the following example shows. Let
be an innite word over the alphabet B , that is b Consider the
sequence dened by
)!. If the word x has innitely many
occurrences of 1, the sequence (k n ) n0 is then residually ultimately periodic (cf.
Example 17). However, if the word x is not ultimately periodic, the sequence
residually ultimately periodic.
In [18], Siefkes studies predicates generated by sequences
two conditions: to be \eectively ultimately reducible" (which corresponds
to being eectively residually ultimately periodic), and to have an essentially
increasing sequence of dierences kn+1 kn , i.e., for each d 0, we have
eectively computable from d). The second
assumption ensures that the sequence (k n+1 kn ) n0 is residually ultimately
periodic in our sense. Although most natural predicates of the class K can also
be treated by Siefkes' approach, there are some exceptions. For instance, let
dened by is even and by
is odd. This sequence (k n ) n0 is in the class K but Siefkes' second assumption
does not apply.
The following theorem shows that the class K contains interesting sequences
and that it is closed under several natural operations.
Theorem 22 Any sequence (k_n)_{n≥0} such that the sequence (k_{n+1} − k_n)_{n≥0} is
N-rational belongs to K. If the sequences (k_n)_{n≥0} and (l_n)_{n≥0} belong to K, the
following sequences also belong to K:
(sum and product) k_n + l_n and k_n l_n,
(exponentiation) k^{l_n} for a fixed integer k, and k_n^{l_n},
(generalized sum and product)
By Lemma 10, the class K contains any sequence of the form k^n Q(n) where
k is a positive integer and Q is a polynomial such that Q(n) is integer for any
integer n. By applying the generalized product to the sequences k_n = n and l_n = n,
the sequence (n!)_{n≥0} belongs to K.
The closure by dierences shows that K contains any rational sequence
of integers such that lim n!1 (k n+1 kn rational
sequence of integers is the dierence of two N-rational sequences [17, Cor. II.8.2].
The class K is also closed by other operations. For instance, it can be proved
that if both sequences belong to K, then the sequence
dened by
belongs to K.
The class K is closed under sum, dierence product and exponentiation but
the following example shows that it not closed under quotient.
Example 23 Consider the sequence
This sequence is not
residually ultimately periodic since the sequence (k n mod 4) is not ultimately
periodic. It turns out that (k n mod p) is not ultimately periodic unless
[1]. It can be easily seen that the greatest integer l such that 2 l divides kn is
the number of 1 in the binary expansion n. Therefore, (k n mod 4) is equal to 2
if n is a power of 2 and to 0 otherwise.
For two integers t and p, define the equivalence relation ≡_{t,p} on N as follows.
For any integers k and k', one has k ≡_{t,p} k' iff either k = k', or k ≥ t, k' ≥ t and k ≡ k' mod p.
The integers t and p are respectively called the threshold and the period of
the relation ≡_{t,p}. Note that the relation k ≡_{t,p} k' always implies that k ≡ k'
mod p. The equivalence relation ≡_{t,p} is of finite index and it is compatible with
sums and products. Indeed, if k ≡_{t,p} k' and l ≡_{t,p} l' hold, then both relations
k + l ≡_{t,p} k' + l' and kl ≡_{t,p} k'l' hold. The relations ≡_{t,p} for t and p capture
the property of being residually ultimately periodic for sequences of integers.
Lemma 24 A sequence (k_n)_{n≥0} of integers is residually ultimately periodic iff
for any integers t and p there are two integers t' and p' such that for any n
greater than t', one has k_n ≡_{t,p} k_{n+p'}.
Proof Indeed, this condition is sufficient since for any element s of a finite
semigroup, there are two integers t and p such that s^t = s^{t+p}. Conversely, this
condition is also necessary. The set N/≡_{t,p} equipped with addition is a finite
semigroup and the canonical projection from N to N/≡_{t,p} is a morphism.
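A small sketch (ours, not from the paper) making the relation ≡_{t,p} and the criterion of Lemma 24 concrete; the sequence k_n = n^2 tested at the end and the parameter values t, p, t', p' are arbitrary illustrative choices.

```python
# Sketch of the threshold-period equivalence and of the criterion of Lemma 24.
def equiv(k, kp, t, p):
    # k =_{t,p} k'  iff  k = k', or both exceed the threshold t and k = k' mod p
    return k == kp or (k >= t and kp >= t and (k - kp) % p == 0)

def check(seq, t, p, t2, p2, horizon=200):
    # does k_n =_{t,p} k_{n+p2} hold for every n >= t2 (checked up to a finite horizon)?
    return all(equiv(seq(n), seq(n + p2), t, p) for n in range(t2, horizon))

square = lambda n: n * n
# for the relation =_{3,4}, the choices t' = 3 and p' = 4 work for (n^2):
print(check(square, 3, 4, 3, 4))   # True: (n+4)^2 - n^2 = 8n + 16 is divisible by 4
```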
The following result is almost trivial but it will often be used.
Lemma 25 Let ≡_{t_1,p_1}, ..., ≡_{t_l,p_l} be l relations associated with fixed integers
t_i and p_i, and let (k_{1,n})_{n≥0}, ..., (k_{m,n})_{n≥0} be m residually ultimately periodic
sequences of integers. There are two integers r and q such that k_{j,n} ≡_{t_i,p_i} k_{j,n+q}
for any 1 ≤ i ≤ l, any 1 ≤ j ≤ m and any n greater than r.
Proof Since each sequence (k_{j,n})_{n≥0} is residually ultimately periodic, there are
two integers r_{i,j} and q_{i,j} such that k_{j,n} ≡_{t_i,p_i} k_{j,n+q_{i,j}}
for any n greater than r_{i,j}.
The two integers r and q defined by r = max r_{i,j} and q = lcm q_{i,j} have
the required property.
The following lemma states that the class of residually ultimately periodic
sequences of integers is closed under sum and product. These results follow
from a more general result which states the class of residually ultimately periodic
sequences of words are closed under substitution. More precisely, if the sequence
of morphisms from A into B is such that each sequence ( n (a)) n0
is residually ultimately periodic and if the sequence (u n ) n0 of words over A
is also residually ultimately periodic, then the sequence ( n (un
residually ultimately periodic. If each word un is for instance equal to the xed
word ab and if the morphism n maps a to vn and b to wn , the word n (u)
is equal to vn wn . The class of residually ultimately periodic sequences of words
are closed under concatenation. If un is equal to a kn and if n (a) is equal to
a l n , then n (un ) is equal to a kn l n . We give here a direct proof of these results
which relies on the compatibility of any relation t;p with sums and products.
Lemma 26 Let (k n ) n0 and (l n ) n0 be two residually ultimately periodic sequences
of integers. Both sequences are also residually
ultimately periodic. If lim n!1 (k n l n then the sequence (k n l n ) n0
is also residually ultimately periodic.
The following example shows that the assumption on the limit of the sequence
kn l n is really necessary. Let be an innite word
over the alphabet B . Consider the sequences (k n ) n0 and (l n ) n0 dened by
These two sequences are obviously residually
ultimately periodic (cf. Example 17). However, the dierence kn l n is equal
to n!. If the word x is not ultimately periodic, the sequence (k n l n ) n0
is not residually ultimately periodic.
Proof By Lemma 25, there are then r and q such that kn t;p kn+q and
l n t;p l n+q for any n greater than r. This yields kn
and kn l n t;p kn+q l n+q for n greater than r. The sequences
are residually ultimately periodic.
The relations kn t;p kn+q and l n t;p l n+q imply that
and l This yields kn l
the dierence kn l n is greater than t for all n greater
than some r 0 . Then for any n greater than r and r 0 , one has kn l n t;p
kn+q l n+q for n greater than r 0 and the sequence (k n l n ) n0 is residually
ultimately periodic.
The following lemma states the class of residually ultimately periodic sequences
of integers is closed under generalized sum and product when l
It will be used to prove the general case.
Lemma be a residually ultimately periodic sequence of inte-
gers. The sequences (Kn ) n0 and (Ln ) n0 dened by
are also residually ultimately periodic.
Proof Let t;p be the relation associated with two xed integers t and p. There
are then two integers r and q such that k_n ≡_{t,p} k_{n+q} for any n greater than r.
Let k be the sum
. Note that
integer l.
There are then two integers r 0 and q 0 such that r 0 k t;p (r
that , Kn t;p Kn+qq 0
holds for any n greater than r Indeed, one has
r
t;p
r
t;p
r
t;p
r
The proof for Ln is very similar. It suces to replace each sum by a product.
For instance, the constant k is dened by
and the two integers
r 0 and q 0 are chosen such that k r 0
.
The previous lemma has the following corollary.
Corollary 28 Any sequence (k n ) n0 in K is residually ultimately periodic.
The following lemma is needed to prove that the class K is closed under
generalized sum and product.
residually ultimately periodic
sequences of integers. Both sequences (Kn ) n0 and (Ln ) n0 dened by
are then residually ultimately periodic.
Proof Let t;p be the relation associated with two xed integers t and p. There
are then two integers r and q such that k_n ≡_{t,p} k_{n+q} for any n greater than r. Let
k be the sum
. Note that
integer n greater
than r. There are then two integers r 0 and q 0 such that r 0 k t;p (r
Lemma 25, there are also two integers r 00 and q 00 such that l n r;q l n+q 00
and
dn r+r 0 q;qq 0 dn+q 00 for any n greater than r 00 . We claim that Kn t;p Kn+q 00
for any n greater than r 00 .
We rst claim that Kn+q 00 t;p
the result obviously
holds. Otherwise, one has l n r and l n+q 00 r. Since l
has k i t;p k i+l where l = l n+q 00 l n for any l n i. This proves the claim.
We now prove that
i=ln t;p Kn . If the result holds
obviously. Otherwise, one has dn r
we may assume that dn+q 00 > dn . Since dn+q
suppose that dn+q 00
l. One has then
t;p
l n+dn+lqq 0
t;p
l n+dn r 0 q
t;p
l n+dn r 0 q
t;p
l n+dn X
The proof for Ln is similar. It suces to replace each sum by a product.
We finally come to the proof of Theorem 22.
Proof We prove that any sequence (k n ) n0 such that the sequence (k n+1
kn ) n0 is N-rational belongs to K. This follows from Theorem 8 and from
Proposition but we also provide a direct proof. If the sequence (d n ) n0
is dened by is N-rational, there is a matrix representation
(L; M;C) such that the relation t;p to
matrices by setting M t;p M 0 i the relation M k;l t;p M 0
k;l holds for any
(k; l)-entry of the matrices. There are then two integers r and q such that
M r t;p M r+q and this implies M n t;p M n+q for n greater than r. Thus, one
has dn t;p dn+q for n greater than r.
If both sequences belong to to K, Lemma 26 applied
to the sequences (k n+1 kn ) n0 and (l n+1 l n ) n0 shows that the sequence
belongs then to K. If furthermore, the assumption on the limit is
fullled, it also shows that the sequence (k n l n ) n0 belongs then to K.
The dierence kn+1 l n+1 kn l n is equal to kn+1 (l n+1 l n )+(kn+1 kn )l n . By
Lemma 27, the sequences (k n ) n0 and (l n ) n0 are residually ultimately periodic.
By Lemma 26, the sequence of dierences is then residually ultimately periodic
and the sequence (k n l n ) n0 belongs then to K.
The dierence k l n+1 k l n is equal to k l n (k l n+1 l n 1). By lemma 27, both
sequences k l n and k l n+1 l n are residually ultimately periodic. By Lemma 26, the
sequence of dierences is then residually ultimately periodic and the sequence
belongs then to K.
Let Kn be the sum
. The dierence Kn+1 Kn is equal to the sum
. By Lemma 27, the sequence (l n ) n0 is residually ultimately periodic
and by Lemma 29, the sequence of dierences is then residually ultimately
periodic.
Let Kn be the product
. The dierence Kn+1 Kn is equal to
l n
Y
l n+1
Y
By Lemma 29, the sequence (Ln ) n0 dened by
residually
ultimately periodic. By Lemma 27 applied to the sequence (Ln ) n0 , the sequence
dened by L 0
residually ultimately periodic.
By Lemma 26, the sequence of dierences is then residually ultimately periodic
and the sequence (Kn ) n0 belongs then to K.
Let dn be the dierence kn+1 kn , let d 0 n be the dierence k l n+1
n . We
assume that the sequence (k n ) n0 is strictly increasing and we also assume that
l n 2 for n greater than some constant which can be eectively computed.
We prove that the sequence (d 0
residually ultimately periodic. Let t;p
be the relation associated with two xed integers t and p. We rst claim that
greater than t. On has indeed the following inequalities
Since the sequence (k n ) n0 is assumed to be strictly increasing, dn is non-zero
and kn is greater than t for any n greater than t.
By Lemma 25, there are two integers s and m such that k n t;p k n+m for
any integer k and any integer n greater than s. Note that if n s and if
k t;p l, then k n t;p l n t;p l n+m . By Lemma 25, there are two integers r
and q such that kn t;p kn+q , dn t;p dn+q 0
, l n s;m l n+q for any integer n
greater than r. We claim that d 0
for any integer n greater than r.
n+q t, it suce to prove that d 0
n+q mod p. The
relations kn t;p kn+q and l n s;m l n+q 0
imply k l n
n+q . The relation
kn+1 t;p kn+q+1 and l n+1 r;q l n+q+1 imply k l n+1
n+q+1 . This two
relations then imply d 0
n+q mod p.
6 Conclusion
We have introduced a large class of unary predicates P over N such that the
corresponding Büchi acceptance problem Acc_{x_P} (and hence the monadic theory
MTh⟨N, <, P⟩) is decidable. The class contains all morphic predicates (which
solves a problem of Maes [14, 15]). The connection to the work by Elgot and
Rabin [11] and Siefkes [18] was established by extending the class of morphic
predicates to the class of the residually ultimately periodic predicates. Finally,
strong closure properties (under sum, product, and exponentiation) were shown
for certain residually ultimately periodic predicates (where the sequence of differences
of successive elements is residually ultimately periodic). Altogether we
obtain a large collection of concrete examples P such that MTh⟨N, <, P⟩ is
decidable, containing the k-th powers and the k-powers for each k, the value sets of
polynomials over the integers, the factorial predicate, the Fibonacci predicate,
as well as the predicates derived from morphic words like the Thue-Morse word
and the Fibonacci word.
Let us mention some open problems.
Our results do not cover expansions of ⟨N, <⟩ by tuples (P_1, ..., P_r) of predicates
rather than by single predicates. In his dissertation, Hosch [13] has solved
the problem for a special case of such predicates. We do
not know whether MTh⟨N, <, P_1, ..., P_r⟩ is decidable if the P_i are just known
to be residually ultimately periodic.
There should be more predicates P for which the Büchi acceptance problem
Acc_{x_P} and hence the theory MTh⟨N, <, P⟩ is decidable. A possible next step
is to consider Sturmian words, a natural generalization of morphic words (see
[7]).
Finally, we should recall the intriguing question already asked by Büchi and
Landweber in [9] ("Problem 1"): Is there an "interesting" recursive predicate P
such that MTh⟨N, <, P⟩ is undecidable? How about P being the prime number
predicate?
Acknowledgement
The authors would like to thank Jorge Almeida, Jean-Paul Allouche, Jean Berstel
and Jacques Désarménien for very interesting suggestions and comments.
--R
Linear cellular automata
Finite Semigroups and Universal Algebra.
Decidability and undecidibility of theories of with a predicate for the primes.
Axel Thue's work on repetitions in words.
Rational Series and Their Languages.
The monadic theory of morphic in
Decidability and undecidibility of extensions of second
Concrete Mathe- matics
Decision Problems in B
Decidability of the
An automata theoretic decidability proof for the
Open questions around B
Decidable extensions of monadic second order successor arith- metic
The theory of successor with an extra predicate.
On the bounded monadic theory of well-ordered structures
Automata on in
Ehrenfeucht games
--TR
Rational series and their languages
Automata on infinite objects
Linear cellular automata, finite automata and Pascal''s triangle
Open questions around Büchi and Presburger arithmetics
An automata theoretic decidability proof for first-order theory of ⟨N, <, P⟩ with morphic predicate P
Concrete Mathematics
Automata
The Monadic Theory of Morphic Infinite Words and Generalizations
Ehrenfeucht Games, the Composition Method, and the Monadic Theory of Ordinal Words
Decision problems in buechi''s sequential calculus
--CTR
Alexander Rabinovich, On decidability of monadic logic of order over the naturals extended by monadic predicates, Information and Computation, v.205 n.6, p.870-889, June, 2007
Jean Berstel , Luc Boasson , Olivier Carton , Bruno Petazzoni , Jean-Eric Pin, Operations preserving regular languages, Theoretical Computer Science, v.354 n.3, p.405-420, 4 April 2006
Erich Grädel , Wolfgang Thomas , Thomas Wilke, Literature, Automata logics, and infinite games: a guide to current research, Springer-Verlag New York, Inc., New York, NY, 2002 | morphic predicates;second-order
637371 | New Interval Analysis Support Functions Using Gradient Information in a Global Minimization Algorithm. | The performance of interval analysis branch-and-bound global optimization algorithms strongly depends on the efficiency of selection, bounding, elimination, division, and termination rules used in their implementation. All the information obtained during the search process has to be taken into account in order to increase algorithm efficiency, mainly when this information can be obtained and elaborated without additional cost (in comparison with traditional approaches). In this paper a new way to calculate interval analysis support functions for multiextremal univariate functions is presented. The new support functions are based on obtaining the same kind of information used in interval analysis global optimization algorithms. The new support functions enable us to develop more powerful bounding, selection, and rejection criteria and, as a consequence, to significantly accelerate the search. Numerical comparisons made on a wide set of multiextremal test functions have shown that on average the new algorithm works almost two times faster than a traditional interval analysis global optimization method. | Introduction
. In this paper the problem of finding the global minimum f* of
a real valued one-dimensional continuously differentiable function f defined over a closed interval S,
and the corresponding set S* of global minimizers, is considered, i.e.:
f* = f(x*) = min{ f(x) : x ∈ S },   x* ∈ S*.    (1.1)
In contrast to one-dimensional local optimization problems, which were very well studied
in the past, the univariate global optimization problems are in the area of interest
of many researchers now (see, for example, [1, 6, 10, 9, 11, 14, 16, 15, 17, 21, 28, 30,
32]). Such an interest is explained by the existence of a large number of applications where
it is necessary to solve this kind of problem (see [2, 10, 12, 29, 30, 31]). On the other
hand, numerous approaches (see, for example, [5, 11, 13, 18, 19, 22, 23, 26, 31]) make it possible
to generalize to the multidimensional case methods developed to solve univariate
problems.
In those cases where the objective function f(x) is given by a formula, it is possible
to use an Interval Analysis Branch-and-Bound approach to solve the problem (1.1)
(see [13, 19, 20, 23]). A general global optimization algorithm based on this approach
is shown in Algorithm 1.
The algorithm selects the next interval to be processed (selection rule, line 3),
which can be totally or partially rejected when it is guaranteed that it does not
contain any global minimizer, (elimination rule, line 4). This elimination process is
carried out by using information obtained from the inclusion functions which return
y E-mail: [email protected]
z E-mail: [email protected]
x E-mail: [email protected]
{ Department of Computer Architecture and Electronics, University of Almera, Spain. This work
was supported by the Ministerio de Educacion y Cultura of Spain (CICYT TIC99-0361).
k E-mail: [email protected]. ISI-CNR c/o DEIS, Universita della Calabria, 87036 Rende (CS)
Italy, and University of Nizhni Novgorod, Nizhni Novgorod, Russia.
Algorithm 1 A general interval Branch-and-Bound global optimization algorithm.
Funct IGO(S, f)
1. Set the working list L := {S} and the final list Q := {}
2. while ( L ≠ {} )
3.   Select an interval X from L
4.   if X cannot be eliminated
5.     Divide X generating the subintervals X_1, ..., X_k
6.     if X_i satisfies the termination criterion
7.       Store X_i in Q
8.     else
9.       Store X_i in L
10. return Q
an enclosure of the real range of f(x) (and in some cases of f'(x) and f''(x)) on X
(bounding rule). If the interval cannot be rejected, it is subdivided (division rule, line
5). When the generated subintervals are informative enough, they are stored in the
final list (termination rule, line 7). Otherwise, they are stored in the working list for
further processing (line 9). The algorithm finishes when there are no intervals to be
processed (line 2) and returns the set of intervals with valuable information (line 10).
An overview on theory and history of these rules can be found, for example, in [13].
Of course, every concrete realization of Algorithm 1 depends on the available
information about the objective function f(x). In this paper it is supposed that
inclusion functions can be evaluated for f(x) and its first derivative f'(x) on X.
Thus, the information about the objective function which can be obtained during the
search is the following:
X, an interval defined by its lower and upper bounds.
F(X), the inclusion function of f(x) at X obtained by
Interval Arithmetic. For the real range of f(x) over X the following inclusion
holds: f(X) = { f(x) : x ∈ X } ⊆ F(X).
F(x), the inclusion of the function f(x) at a point x ∈ X.
F'(X), the inclusion function of f'(x) over X.
When information (1.2) is available, the rules of a traditional realization of Algorithm
1 can be written more precisely. Below we present a Traditional Interval
analysis global minimization Algorithm with Monotonicity test (TIAM) which is used
frequently for solving the problem (1.1) using the information (1.2) (see [13]).
Selection rule: Among all the intervals X_i stored in the working list L select an
interval X such that the lower bound of F(X) is the smallest one.
Bounding rule: The fundamental Theorem of Interval Arithmetic provides a natural
and rigorous way to compute an inclusion function. In the present study
the inclusion function F of the objective function f is available by Extended
Interval Arithmetic (see [7, 13]); a small interval-arithmetic sketch illustrating this rule is given after this list.
Elimination rule: The common elimination rules are the following:
Midpoint test: An interval X is rejected when the lower bound of F(X) exceeds f~, where f~ is the
best known upper bound of f*. The value of f~ is updated by evaluating
F(m(X)) in selected intervals, where m(X) is the midpoint
of the interval X.
Cut-off test: When f~ is improved, all intervals X stored in the working
and final lists satisfying the condition that the lower bound of F(X) exceeds f~ are rejected.
Monotonicity test: If for an interval X the condition 0 ∉ F'(X) is fulfilled,
then this means that the interval X does not contain any minimum and,
therefore, can be rejected.
Division rule: Usually two subintervals are generated using m(X) as the subdivision
point (bisection).
Termination rule: A parameter ε determines the desired accuracy of the problem
solution. Therefore, intervals X that have a width w(X) less than ε, i.e.,
w(X) < ε, are moved to the final list Q. Other termination
criteria can be found in [23].
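The following minimal sketch (ours, not the Extended Interval Arithmetic library used in the paper) illustrates the bounding rule above: the natural interval extension of an expression always encloses the true range of f over X, although usually with some overestimation. The toy objective f(x) = x(x − 2) and the integer endpoints are illustrative only, and outward rounding is ignored.

```python
# Minimal interval arithmetic and a natural inclusion function (toy example).
class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self): return f"[{self.lo}, {self.hi}]"

def F(X):
    # natural interval extension of f(x) = x * (x - 2)
    return X * (X - Interval(2, 2))

X = Interval(0, 1)
print(F(X))   # [-2, 0]: encloses the true range [-1, 0] of f on X, but is wider
```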
As can be seen from the above description, the algorithm evaluates lower bounds
for f(x) over every interval separately, without taking into consideration information
which can be obtained from other intervals. F'(X) is used only during the Monotonicity
test and is not connected with the information F(m(X)) and F(X). Only F(X)
is used in order to obtain a lower bound for f(x) over X; all the rest of the search
information is not used for this goal. The only exchange of information between the
intervals is done through f~.
In Lipschitz global optimization there exist algorithms for solving problem (1.1)
evaluating lower bounds by constructing support functions (in fact, F(X) can be
viewed as a special, constant support function for f(x) over X) for the objective
function too (see, for example, [6, 10, 11, 17, 18, 21, 22, 27, 28, 30, 31]). They work
in a way similar to TIAM: support functions are built and successively
improved in order to obtain a better lower bound for the global minimum. Of course,
these support functions are completely different and are built on the basis of different
ideas. An interesting aspect of the support function concept in the context of this
paper is the use of the search information. When a support function is built for an
interval [a, b], the information regarding the neighbors [c, a] and [b, d] is also used in order
to construct a better support function and to obtain a better lower bound for f*.
In this paper, a new Interval analysis global minimization Algorithm using Gradient
information (IAG) is proposed for solving problem (1.1). It uses the same information
(1.2) as TIAM but, due to a more efficient usage of the search information, it
constructs support functions which are closer to the objective function and enables
us to obtain better lower bounds. As it will be shown hereinafter, the new method IAG
The rest of the paper is structured as follows. In Section 2 some theoretical results
explaining construction of the support functions and lower bounds are presented.
The algorithm IAG is described in Section 3. Numerical experiments comparing
performance of TIAM and IAG are presented in Section 4. Finally, Section 5 concludes
the paper.
2. New support functions based on interval evaluations of the objective
function and its first derivative. In order to proceed with the description of
the new algorithm, theoretical results are presented to explain how the new support
functions and the corresponding lower bounds are constructed in IAG. We start with
the following lemma illustrated in Figure 2.1, in a similar way as was done for non-
Fig. 2.1. Graphical example of Lemma 2.1 and Theorem 2.2
differentiable functions in both Lipschitz optimization (see [9, 11, 18, 21, 22, 28, 30,
31]) and approaches based on evaluation of slopes as presented in [24].
Lemma 2.1. Given a continuously differentiable function f : S → R, where S is
a closed interval in R, an interval X ⊆ S, an enclosure F(c) of f(c), c ∈ X, and an
enclosure F'(X) of f'(X), then the following bounds hold for f(x), x ∈ X:
Proof. It follows from the Mean Value Theorem that there exists a point ξ ∈ [x, c]
such that
f(x) = f(c) + f'(ξ)(x − c).    (2.2)
By extending the equation (2.2) to intervals the following inclusion
f(x) ∈ F(c) + F'(X)·(x − c)    (2.3)
is obtained. Let us take a generic point x ∈ X. In dependence on the mutual
disposition of the points c and x in X (x < c, x = c, or x > c), three results can be deduced from (2.3):
the bounds stated in the Lemma. Lemma is proved.
This Lemma gives us the possibility to construct a new interval analysis support
function for f(x). It can be seen from Figure 2.1 that it is similar to the ones built
in Lipschitz global optimization (see, for example, [10, 21, 22, 27, 31]). The Lipschitz
support functions are piece-wise linear. The slope of each linear piece is L or −L,
where L is the Lipschitz constant. In our approach, for every interval X the slopes
of the support functions are given by the upper bound of F'(X) for all x ≤ c and by
the lower bound of F'(X) for all x ≥ c (see Figure 2.1).
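The bound of Lemma 2.1 can be turned directly into such a piecewise-linear minorant. The following sketch (ours, not code from the paper) uses the lower bound lb(c) of F(c) and an enclosure [g_lo, g_hi] of f'(X); the test function sin(x) and all concrete numbers are illustrative assumptions.

```python
from math import sin, cos

def support(x, c, lb_c, g_lo, g_hi):
    # f(x) = f(c) + f'(xi)(x - c) for some xi between x and c, so taking the
    # most pessimistic slope on each side of c gives a lower bound for f(x).
    slope = g_hi if x <= c else g_lo
    return lb_c + slope * (x - c)

# illustrative check with f(x) = sin(x), X = [2, 4], c = 3,
# f'(X) = cos(X) enclosed by [-1, cos(2)], and lb(c) = sin(3):
ok = all(sin(x) >= support(x, 3.0, sin(3.0), -1.0, cos(2.0)) - 1e-12
         for x in [2 + i * 0.01 for i in range(201)])
print(ok)   # True: the piecewise-linear function supports f from below on X
```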
The following results are the basics for the new support functions and explain
how the new lower bounds for f are evaluated.
Theorem 2.2. Given closed intervals X;S such that X S R and a continuously
dierentiable function f R. Let for a point c 2 X a lower bound lb(c) of
f(c) be determined and an enclosure F 0 (X) of f 0 (X) be obtained. For a given current
upper bound f~ of f , there exists a set V X where all the global minimizers of X,
if any, are included.
Proof. For a minimizer point x 2 S it applies that f(x ) f~. Combining with
Lemma 2.1 a minimizer x 2 X \ S has to fulll:
and therefore can only be located in set:
f~
From Theorem 2.2 it can be derived that if f~ < lb(c) then
can be constructed as:
As an example, depicted in Figure 2.1 for the case F 0 (X) > 0
Notice that V can be obtained in a similar way by applying the interval Newton
operator to nd the roots of the function f(x) f~ on X , under the Theorem conditions
[8, 19, 20].
Theorem 2.3. Let us consider a continuously dierentiable function f
where S is a closed interval in R and intervals X; Y such that X Y S. If:
Fig. 2.2. Interval V is the region which can contain global minimizers. The set X\V does not
contain any global minimizer.
1. lower bounds lb(x) and lb(x) of, respectively, f(x) and f(x), have been evaluated
2. a current upper bound f~ of f is such that
f~ minflb(x); lb(x)g;
3. bounds
Then only the interval
G
can contain global minimizers and a lower bound z(X; lb(x); lb(x); G) of f(x) over the
interval X can be calculated as follows:
(see Figure 2.2).
Proof. By applying Theorem 2.2 with x the interval V 2 from (2.7) is obtained.
The same operation with the point us the interval V 1 from (2.6). Then,
the interval V from (2.8) is obtained as
Let us prove now the formula (2.9). Since X Y , F
applying the Mean Value Theorem, we have
and
From these two inequalities follows
so
and
which proves the theorem.
Corollary 2.4. If for an interval X the inequality z(X, lb(x), lb(x̄), G) > f~ is
fulfilled, then it can be derived that X does not contain any global minimizer.
Proof. Proof is evident and so it is omitted.
Let us return now to the problem (1.1). We can use the information (1.2) during
the global search. Thus, by using F(X) together with the function d(x) from (2.10)
we can build a new support function D(x) for f(x) over each interval X.
The corresponding new lower bound Fz(X) for f(x) over the interval X is calculated
in the following way:
Fz(X) = max{ lower bound of F(X), z(X, lb(x), lb(x̄), G) }.    (2.12)
Essential in the use of the algorithm is that, for the interval V = [v, v̄] obtained from the set
X according to (2.8), the current value of f~ is a lower bound of f at v and v̄, i.e.
f~ ≤ f(v) and f~ ≤ f(v̄), so lb(v) = lb(v̄) = f~ are easily available bounds.
3. Description of the new algorithm. On the basis of theoretical results
presented in the previous section we can determine new rules for Algorithm 1 in
order to introduce the new Interval analysis global minimization Algorithm using
Gradient information (IAG) described in Algorithm 2:
Selection rule: Select an interval X such that
where L is the working list ordered by non-decreasing values of F z(X i ) as
the rst ordering criterion and non-increasing order with respect to the age of
the intervals as the second ordering criterion. Therefore, the selected interval
will always be at the head of the working list.
Bounding rule: The lower bound F z(X) from (2.12) is used.
Elimination rule: Four elimination rules are used:
Monotonicity test: If for an interval X the condition 0 ∉ F'(X) is fulfilled,
then this means that the interval X does not contain any minimum and,
therefore, can be rejected.
RangeUp test: An interval X is rejected if f~ < Fz(X).
Gradient test: The subregion X \ V, where V is defined by (2.8), is
rejected. Of course, when V is empty the whole interval X can be eliminated.
Cut-off test: When f~ is improved, all intervals X stored in the working
and final lists for which the condition Fz(X) > f~ is fulfilled are rejected.
Note that this Cut-off test is different from the Cut-off test of TIAM,
where the condition F(X) > f~ was used.
Division rule: Suppose that an interval X has been obtained as a result of application
of RangeUp and Gradient tests to an interval Y and then stored
in the working list L. If the interval X is chosen for subdivision, the point
m(X) is used as the subdivision point. Note, that in general m(X) 6= m(Y )
and therefore this division rule does not coincide with the division rule of the
TIAM algorithm.
Algorithm 2 Interval analysis global minimization Algorithm using Gradient information
Funct
1.
2. if ( F
3. x~ := s; f~ := F
4. else
5. x~ := s; f~ := F
Test
7. return (f
8.
9. X := GradTest (
10. lb(x) := lb(x) := f^(X) := f~
11. F z(X) := maxfF (X); z(X; lb(x); lb(x); F 0 (S))g Lower bound of f(X)
12. if ( w(X) )
13. Save fX; f^(X); F z(X)g in Q
14. else
15. Save fX; f^(X); F z(X)g in L
16. while ( L 6= fg )
17. fX; f^(X); F z(X)g := Head(L)
18. comment X :=
19. if (0 2 F 0 (X)) Monotonicity Test
20. if
21. f~ := F (m(X))
22. CutOTest ( f~;
25.
26. for i := 1,2
GradTest
28.
29. if ( w(X fully rejected
test
33. Save fX
34. else
35. Save fX
36. return Q
Every element X i in the working and nal lists, L and Q, respectively, is a structure
with the following data:
Bounds x i and x i of the interval X i .
The value being the value of f~ at the moment of creation of the
The lower bound F z(X i ).
Let us comment Algorithm 2. The IAG algorithm starts by evaluating F (s) and
F (line 2) and initializing x~ and 5). If the monotonicity
Algorithm 3 Gradient test
Funct GradTest(X; lb(x); lb(x); f~; G)
1. if ( lb(x) > f~ )
2.
3.
4. if ( w(X) > 0 and lb(x) > f~ )
5.
7. return X
Fig. 3.1. The upper graph is an example of the initial phase of IAG. Bottom graphs show
an example of how the intervals X1 and X2 are built from X. The bottom left hand graph shows
the case F(m(X)) > f~ and the bottom right hand the case f~ < min{lb(x), lb(x̄)}. Only shaded areas can
contain f*.
test is satisfied (line 6), the algorithm finishes and the solution is given by {f~, x~}.
Otherwise, in order to apply the Gradient test, lb(s) and lb(s̄) are initialized in line 8.
The Gradient test is implemented in the GradTest procedure presented in Algorithm
3, which is applied to S on line 9. The GradTest procedure applies Theorems 2.2 and
2.3 to S, using f~ and F'(S), and returns an interval X ⊆ S such that the set
of global minimizers of S is also in X. Lower bounds of f(x) and f(x̄) (lb(x) and
lb(x̄), respectively) and f^(X) are set to f~, and Fz(X) is computed (lines 10 and 11).
The interval X is stored in the final list Q (line 13) or in the working list L (line 15),
depending on the value of w(X). Graphically, this initialization stage is shown in the
top graph of Figure 3.1.
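Since the explicit formulas of Algorithm 3 were lost above, the following sketch (ours) only illustrates the idea behind the GradTest procedure rather than its exact implementation: when the lower bound at an endpoint of X already exceeds f~, the mean-value bound of Lemma 2.1 shows that a slice next to that endpoint cannot contain any point with f(x) ≤ f~ and can be cut off. The enclosure G = [g_lo, g_hi] of the derivative and all concrete numbers are illustrative assumptions.

```python
def grad_test(a, b, lb_a, lb_b, f_best, g_lo, g_hi):
    # Shrink [a, b] to the part that may still contain a point x with f(x) <= f_best.
    if lb_a > f_best:                       # left endpoint already above f_best
        if g_lo >= 0:
            return None                     # f cannot come back down: reject X
        a = a + (lb_a - f_best) / (-g_lo)   # f(x) >= lb_a + g_lo*(x - a) > f_best before this point
    if lb_b > f_best:                       # right endpoint already above f_best
        if g_hi <= 0:
            return None                     # f cannot come back down: reject X
        b = b - (lb_b - f_best) / g_hi      # f(x) >= lb_b + g_hi*(x - b) > f_best after this point
    return (a, b) if a <= b else None       # None: the whole interval is rejected

print(grad_test(0.0, 1.0, lb_a=3.0, lb_b=2.0, f_best=1.0, g_lo=-4.0, g_hi=4.0))
# (0.5, 0.75): only the middle part can still contain a global minimizer
```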
After this initialization stage and while the list L is not empty (line 16), IAG
will select the interval at the head of L (line 17) for further processing. If F 0 (X)
does not satisfy the Monotonicity test (line 19), F (m(X)) is evaluated (line 20). If
F (m(X)) is a better upper bound of f than f~, f~ is updated to F (m(X)) (line
21) and the Cut-o test is applied to the intervals in the working and nal lists (line
22). Then, the interval X is subdivided into two subintervals X 1 and X 2 (line 23).
These subintervals inherit from X the lower bound of f(x) in one of their bounds (line
24) and the other shared bound is set to F (m(X)) (line 25). For each subinterval
the Gradient test is carried out using the derivative information of X
(line 27) instead of the value of F 0 (X i ) which has not been evaluated 1 . If an interval
2g is not rejected (line 29), F z(X i ) is evaluated using also the value of
Only when the RangeUp test is not satised (line 31), the interval
will be saved in the nal (line 33) or working lists (line 35).
Bottom graphs of Figure 3.1 show how the intervals X 1 and X 2 are built from
X . In case of F (m(X)) > f~ (see lower left hand graph), interval X 1 and X 2 can
be shortened by applying the GradTest procedure. In addition, if f~ < lb(x) and/or
f~ < lb(x) (see lower right hand graph of Figure 3.1), X 1 and/or X 2 can be shortened
again by the GradTest procedure. Notice that in the IAG algorithm, if f~ < lb(x)
then f~ < lb(x), too, and vice versa.
4. Numerical results. The new algorithm IAG has been numerically compared
with the method TIAM on a set of 40 test functions. This set of test functions is
described in Table 4.1 and has been taken from [3, 4, 24]. The search region and
the number of local and global minimizers are shown for all the functions. For both
algorithms the stopping criterion was
Table
4.2 shows numerical comparison between TIAM and IAG. Column NFE
presents the number of Interval Function Evaluations, i.e., the number of F (X) evaluations
plus the number of interval point evaluations F (x). Column NDE shows
the number of Interval Function Evaluations of the derivative F'(X). Columns TM
and TG present NFE + NDE for the algorithms TIAM and IAG, respectively. Finally,
column TM/TG provides information on the relative speedup of the IAG algorithm
compared to the TIAM algorithm.
The last row of Table 4.2 shows the average values of the data in columns TG, TM,
and TM/TG. It can be seen from Table 4.2 that the ratio TM/TG is always greater
than one, so IAG outperforms TIAM for all the functions. For this set of functions
the speedup TM/TG ranges between [1.22, 6.98] and on average is 1.78. It can also
be seen from Table 4.2 that for those functions where TIAM needs a lot of function
evaluations the largest values of TM/TG were obtained (see the functions
which obtained speedups of 4.56 and 6.98, respectively).
Figures
4.1 and 4.2 graphically show how algorithms TIAM and IAG work. The
function presented in Figure 4.1 has only one global minimizer while the
function has two global minimizers. In both gures the left hand
graph refers to algorithm TIAM while right hand graphs depict the performance
of algorithm IAG. For all the graphs the termination criterion was w(X) 0:05.
1 Evaluation of these values will become crucial only for that interval which is chosen later
for subdivision. By using the derivative information of X we avoid additional computations of
which can be useless if the interval will never be chosen for subdivision.
Table
Description of the test functions. Column N shows the number of the function, S the initial
search interval, column M presents the number of local minimizers, and G shows the number of
global minimizers.
Function
26 e sin(3x) [0:2; 7:0] 5 3
28 sin(x) [0; 20] 4 3
29 2(x
Horizontal arrows represent the values of f~ during the execution. Boxes represent
the margins of all the evaluated intervals X and the lower and upper bounds of F(X).
For the IAG algorithm, the new support function d(x) from (2.10) is also shown.
At the top of the graphs, colored boxes represent the set of rejected intervals as
well as intervals which contain a global minimizer. The color of a box specifies the
criterion responsible for the rejection of that interval (Blue = GradTest procedure,
midpoint and cut-off tests for TIAM and RangeUp
and cut-off tests for IAG, Yellow = boxes in the final list Q). From these graphs it is
easy to realize how efficient every rejection criterion is.
Table
Results of numerical comparison between TIAM and IAG.
9 156 115 41 101 76 25 1.54
28 1.42
28 1.51
14
22 284 209 75 178 132 46 1.6
26 336 268 68 271 214 57 1.24
28 366 292 74 261 206 55 1.40
34 636 460 176 350 259 91 1.82
36 982 711 271 445 331 114 2.21
772.1 282.33 1.78
Figures
4.1 and 4.2 show that for these examples more than 50% of the initial
interval S was rejected due to the GradTest procedure. It is also clearly shown that
TIAM had to evaluate more intervals than IAG. Figure 4.2 shows some intervals
where the best lower bound of f(X) was that obtained by the computation of z(X)
instead of F (X), i.e., that F It should be noticed also that
the TIAM algorithm is unable to take advantage of the information provided by the
evaluation of F (m(X)) when F (m(X)) > f~. In contrast, IAG is able to reduce the
interval even in this case (clearly shown in Figures 4.1 and 4.2).
Fig. 4.1. Graphical representation of the execution of the TIAM (left hand graph) and IAG (right
hand graph) algorithms for a test function with one global minimizer.
Fig. 4.2. Graphical representation of the execution of the TIAM (left hand graph) and IAG (right
hand graph) algorithms for a test function with two global minimizers.
5. A brief conclusion. In this paper a new way to calculate support functions
for multiextremal univariate functions has been presented. The new support functions
are based on the usual information used in global optimization working with Interval
Analysis: interval evaluations of the objective function at a point, at an interval, and
interval evaluation of the first derivative of the objective function at an interval, i.e.,
F(x), F(X), and F'(X).
Traditional interval analysis global optimization algorithms use this information
separately: F(x) is used to obtain an upper bound for the global minimum, F(X) is
used to determine a support function (a constant one) for the objective function
over X and, finally, F'(X) is used in the Monotonicity test for rejecting intervals
which do not contain global minimizers. In contrast, the new method uses the
whole information jointly in order to construct a support function which is closer to
the objective function. The new support function enables us to develop more powerful
rejection and bounding criteria and to accelerate the search significantly. In fact, the
new algorithm works almost two times faster in comparison with a traditional interval
analysis method on a wide set of multiextremal test functions.
The new approach has several possibilities for generalization. First, interval analysis
bounds for F'(X) can be substituted by other estimates (for example, slope tools
developed in [25] for non-smooth problems) in order to obtain new support functions.
Second, the new method can be generalized to the multi-dimensional case by the diagonal
approach proposed in [22] or by using adaptively constructed space-filling curves
proposed in [26].
Acknowledgement
. The authors would like to thank E.M.T. Hendrix for his
useful remarks and suggestions.
--R
An algorithm for
State of the Art in Global Optimization.
A global search algorithm using derivatives
Global Optimization Using Interval Analysis
Global optimization of univariate lipschitz functions: 1.
Handbook of Global Optimization
Guaranteed ray intersections with implicit surfaces
Continuous Problems
A method for converting a class of univariate functions into d.
A bridging method for global optimization
An adaptive stochastic global optimization algorithm for one-dimensional functions
Equivalent methods for global optimization
Convergence rates of a global optimization algorithm
Interval analysis
Interval Methods for Systems of Equations
An algorithm for
New computer methods for global optimization
Automatic Slope Computation and its Application in Nonsmooth Global Optimiza- tion
Two methods for solving optimization problems arising in electronic measurement and electrical engineering
Numerical Methods on Multiextremal Problems
Global optimization with non-convex constraints: Sequential and parallel algorithms
An improved univariate global optimization algorithm with improved linear bounding functions
--TR
--CTR
Tams Vink , Jean-Louis Lagouanelle , Tibor Csendes, A New Inclusion Function for Optimization: Kite&mdashlThe One Dimensional Case, Journal of Global Optimization, v.30 n.4, p.435-456, December 2004 | interval arithmetic;global optimization;branch-and-bound |
637568 | Steps toward accurate reconstructions of phylogenies from gene-order data. | We report on our progress in reconstructing phylogenies from gene-order data. We have developed polynomial-time methods for estimating genomic distances that greatly improve the accuracy of trees obtained using the popular neighbor-joining method; we have also further improved the running time of our GRAPPA software suite through a combination of tighter bounding and better use of the bounds. We present new experimental results (that extend those we presented at ISMB'01 and WABI'01) that demonstrate the accuracy and robustness of our distance estimators under a wide range of model conditions. Moreover, using the best of our distance estimators (EDE) in our GRAPPA software suite, along with more sophisticated bounding techniques, produced spectacular improvements in the already huge speedup: whereas our earlier experiments showed a one-million-fold speedup (when run on a 512-processor cluster), our latest experiments demonstrate a speedup of one hundred million. The combination of these various advances enabled us to conduct new phylogenetic analyses of a subset of the Campanulaceae family, confirming various conjectures about the relationships among members of the subset and confirming that inversion can be viewed as the principal mechanism of evolution for their chloroplast genome. We give representative results of the extensive experimentation we conducted on both real and simulated datasets in order to validate and characterize our approaches. | INTRODUCTION
Genome rearrangements. Modern laboratory techniques can yield the ordering and
strandedness of genes on a chromosome, allowing us to represent each chromosome by
an ordering of signed genes (where the sign indicates the strand). Evolutionary events
can alter these orderings through rearrangements such as inversions and transpositions,
collectively called genome rearrangements. Because these events are rare, they give us
information about ancient events in the evolutionary history of a group of organisms. In
consequence, many biologists have embraced this new source of data in their phylogenetic
work [16, 26, 27, 29]. Appropriate tools for analyzing such data remain primitive when
compared to those developed for DNA sequence data; thus developing such tools is becoming
an important area of research, as attested by recent meetings on this topic [14, 15].
Optimization problems. A natural optimization problem for phylogeny reconstruction
from gene-order data is to reconstruct an evolutionary scenario with a minimum number of
the permitted evolutionary events on the tree-what is known as a most parsimonious tree.
Unfortunately, this problem is NP-hard for most criteria-even the very simple problem of
computing the median of three genomes under such models is NP-hard [9, 28]. However,
because suboptimal solutions can yield very different evolutionary reconstructions, exact
solutions are strongly preferred over approximate solutions (see [33]). Moreover, the relative
probabilities of each of the rearrangement events (inversions, transpositions, and inverted
transpositions) are difficult to estimate. To overcome the latter problem, Blanchette
et al. [5] have proposed using the breakpoint phylogeny, the tree that minimizes the total
number of breakpoints, where a breakpoint is an adjacency of two genes that is present
in one genome but not in its neighbor in the tree. Note that constructing the breakpoint
phylogeny remains NP-hard [7].
Methods for reconstructing phylogenies. Blanchette et al. developed the BPAnalysis
[31] software, which implements various heuristics for the breakpoint phylogeny. We reimplemented
and extended their approach in our GRAPPA [17] software, which runs several
orders of magnitude faster thanks to algorithm engineering techniques [24]. Other heuristics
for solving the breakpoint phylogeny problem have been proposed [8, 12, 13].
Rather than attempting to derive the most parsimonious trees (or an approximation
thereof), we can use existing distance-based methods, such as neighbor-joining (NJ) [30]
(perhaps the most popular phylogenetic method), in conjunction with methods for defining
leaf-to-leaf distances in the phylogenetic tree. Leaf-to-leaf distances that can be computed
in linear time currently include breakpoint distances and inversion distances (the latter
thanks to our new algorithm [3]). We can also estimate the "true" evolutionary distance
(or, rather, the expected true evolutionary distance under a specific model of evolution) by
working backwards from the breakpoint distance or the minimum inversion distance, an
approach suggested by Sankoff [32] and Caprara [10] and developed by us in a series of
papers [23, 34, 35], in which we showed that these estimators significantly improve the
accuracy of trees obtained using the neighbor-joining method.
Results in this paper. This paper reports new experimental results on the use of our distance
estimators in reconstructing phylogenies from gene-order data, using both simulated
and real data. We present several new results (the first two are extensions of the results
we presented at ISMB'01 [23] and the third an extension of the results we presented at WABI'01):
. Simulation studies examining the relationship between the true evolutionary distance
and our distance estimators. We find that our distance estimators give very good predictions
of the actual number of events under a variety of model conditions (including those that
did not match the assumptions).
. Simulation studies examining the relationship between the topological accuracy of
neighbor-joining and the specific distance measure used: breakpoint, inversion, or one of
our three distance estimators. We find that neighbor-joining does significantly better with
our distance estimators than with the breakpoint or inversion distances.
. A detailed investigation of the robustness of neighbor-joining using our distance estimators
when the assumed relative probabilities of the three rearrangement events are very
different from the true relative probabilities. We find that neighbor-joining using our estimators
is remarkably robust, hardly showing any worsening even under the most erroneous
assumptions.
. A detailed study of the efficacy of using our best distance estimator, significantly
improved lower bounds (still computable in low polynomial time) on the inversion length
of a candidate phylogeny, and a novel way of structuring the search so as to maximize the
use of these bounds. We find that this combination yields much stronger bounding in the
naturally occurring range of evolutionary rates, yielding an additional speedup by one to
two orders of magnitude for our GRAPPA code.
. A successful analysis of a dataset of Campanulaceae (bluebell flower) using a combination
of these techniques, resulting in a one-hundred-million-fold speedup over the original
approach-in particular, we were able to analyze the dataset on a single workstation in
a few hours, whereas our previous analysis required the use of a 512-node supercluster.
Our research combines the development of mathematical techniques with extensive experimental
performance studies. We present a cross-section of the results of the experimental
study we conducted to characterize and validate our approaches. We used a large variety of
simulated datasets as well as several real datasets (chloroplast and mitochondrial genomes)
and tested speed (in both sequential and parallel implementations), robustness (in particular
against mismatched models), efficacy (for our new bounding technique), and accuracy
(for reconstruction and distance estimation).
2. BACKGROUND
2.1. The Nadeau-Taylor model of evolution
When each genome has the same set of genes and each gene appears exactly once, a
genome can be described by an ordering (circular or linear) of these genes, each gene
given with an orientation that is either positive (g i ) or negative (-g i ).
Let G be the genome with signed ordering g_1, g_2, ..., g_k. An inversion between indices
a and b, for a ≤ b, produces the genome with linear ordering
g_1, ..., g_{a-1}, -g_b, -g_{b-1}, ..., -g_a, g_{b+1}, ..., g_k.
A transposition on the (linear or circular) ordering G acts on three indices, a, b, c, with
a ≤ b and c ∉ [a, b], picking up the interval g_a, g_{a+1}, ..., g_b and inserting it immediately
after g_c. Thus the genome G above (with the assumption of c > b) is replaced by
g_1, ..., g_{a-1}, g_{b+1}, ..., g_c, g_a, g_{a+1}, ..., g_b, g_{c+1}, ..., g_k.
An inverted transposition is a transposition followed by an inversion of the transposed
subsequence.
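As a concrete illustration of the inversion operation just defined, the following small helper (our own sketch, not taken from any of the tools cited here; it uses 0-based positions, whereas the text numbers genes from 1) reverses a segment of a signed gene order and flips the signs of the genes inside it.

#include <algorithm>
#include <vector>

// Apply an inversion between positions a and b (0-based, a <= b) of a signed,
// linear gene order: the segment is reversed and every sign inside it is flipped.
void applyInversion(std::vector<int> &genome, int a, int b)
{
    std::reverse(genome.begin() + a, genome.begin() + b + 1);
    for (int i = a; i <= b; ++i)
        genome[i] = -genome[i];
}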
The (generalized) Nadeau-Taylor model [25] of genome evolution uses only genome
rearrangement events, so that all genomes retain equal content. The model assumes
that the number of each of the three types of events obeys a Poisson distribution on each
edge, that the relative probabilities of each type of event are fixed across the tree, and that
events of a given type are equiprobable. Thus we can represent a Nadeau-Taylor model
tree as a triplet (T, {λ_e}, (α_I, α_T, α_IT)), where the triplet (α_I, α_T, α_IT) defines the
relative probabilities of the three types of events (inversions, transpositions, and inverted
transpositions). For instance, the triplet (1/3, 1/3, 1/3) indicates that the three event classes are
equiprobable, while the triplet (1, 0, 0) indicates that only inversions happen.
2.2. Distance-based estimation of phylogenies
Given a tree T on a set S of genomes and given any two leaves i, j in T, we denote by
P_ij the path in T between i and j. We let λ_e denote the number of events (inversions,
transpositions, or inverted transpositions) on the edge e during the evolution of the genomes
in S within the tree T. This is the actual number of events on the edge. We can then define
the matrix [λ_ij] of actual distances, with λ_ij = Σ_{e ∈ P_ij} λ_e, which is additive. When given an
additive matrix, many distance-based methods are guaranteed to reconstruct the tree T and
the edge weights (but not the root). Atteson [1] showed that NJ is guaranteed to reconstruct
the true tree T when given an estimate of the additive matrix [λ_ij], as long as the estimate
has bounded error:
Theorem 2.1. (From [1]) Let T be a binary tree and let λ_e and λ_ij be defined as
described above. Let x = min_e λ_e. Let D be any n × n dissimilarity matrix (i.e., D
is symmetric and zero on the diagonal). If
max_{i,j} |D_ij − λ_ij| < x/2,
then the NJ tree, NJ(D), computed for D is identical to T.
That is, NJ is guaranteed to reconstruct the true tree topology if the input distance matrix
is sufficiently close to an additive matrix defining the same tree topology. Consequently,
techniques that yield a good estimate of the matrix [# ij ] are of significant interest.
Distance measures. The edit distance between two gene orders is the minimum number
of inversions, transpositions, and inverted transpositions needed to transform one gene
order into the other. The inversion distance is the edit distance when only inversions are
permitted. The inversion distance can be computed in linear time [3, 18]; the transposition
distance is of unknown computational complexity [4].
Given two genomes G and G # on the same set of genes, a breakpoint in G is an ordered
pair of genes (g a , g b ) such that g a and g b appear consecutively in that order in G, but
neither (g a , g b ) nor (-g b , -g a ) appear consecutively in that order in G # . The number
of breakpoints in G relative to G # is the breakpoint distance between G and G # . The
breakpoint distance is easily calculated by inspection in linear time. See Figure 1 for an
example of these distances.
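To make the breakpoint distance concrete, here is a small sketch (our own illustration, not code from GRAPPA or BPAnalysis) that counts breakpoints between two signed linear gene orders over the same gene set, following the definition above: a pair (a, b) adjacent in G is not a breakpoint only if (a, b) or (-b, -a) is adjacent in G'.

#include <unordered_map>
#include <vector>

// Breakpoint distance between two signed linear gene orders over the same gene set.
int breakpointDistance(const std::vector<int> &g1, const std::vector<int> &g2)
{
    std::unordered_map<int, int> next;            // successor of each signed gene in g2
    for (size_t i = 0; i + 1 < g2.size(); ++i)
        next[g2[i]] = g2[i + 1];

    int breakpoints = 0;
    for (size_t i = 0; i + 1 < g1.size(); ++i) {
        int a = g1[i], b = g1[i + 1];
        bool sameOrder = (next.count(a) && next[a] == b);       // (a, b) adjacent in g2
        bool reversed  = (next.count(-b) && next[-b] == -a);    // (-b, -a) adjacent in g2
        if (!sameOrder && !reversed)
            ++breakpoints;
    }
    return breakpoints;
}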
FIG. 1. Example of transposition, inversion, and breakpoint distances. We obtain G_3 from G_0 after 3 inversions (the genes in the inversion interval are highlighted at each step). G_3 can also be obtained from G_0 with one transposition: move the gene segment (6, 7, 8) to the position between genes 1 and 3. d_T, d_I, and d_B are the transposition, inversion, and breakpoint distances, respectively.
Estimations of true evolutionary distances. Estimating the true evolutionary distance
requires assumptions about the model; in the case of gene-order evolution, the assumption
is that the genomes have evolved from a common ancestor under the Nadeau-Taylor model
of evolution. Sankoff's technique [32], applicable only to inversions, calculates this value
exactly, while IEBP [35] and EDE [23], applicable to very general models of evolution,
obtain approximations of these values, and Exact-IEBP [34] calculates the value exactly for
any combination of inversions, transpositions, and inverted transpositions. These estimates
can all be computed in low polynomial time.
2.3. Performance criteria
Let T be a tree leaf-labelled by the set S. Deleting some edge e from T produces a
bipartition of S into two sets. Let T be the true tree and let T' be an estimate of T, as
illustrated in Figure 2. The false negatives of T' with respect to T, denoted FN(T, T'), are
those bipartitions that appear in T but do not appear in T'. The false negative rate is the
number of false negatives divided by the number of non-trivial bipartitions of T. Similarly,
the false positives of T' with respect to T are defined as those bipartitions that appear in
T' but not in T, and the false positive rate is the ratio of false positives to the number
of nontrivial edges. For example, in Figure 2, the edge corresponding to the bipartition
{1, 2, 3 | 4, 5, 6, 7, 8} is present in the true tree, but not in the estimate, and is thus a false
negative. Note that, if both trees are binary, then the number of false negatives equals the
number of false positives. In reporting our results, we will use the false negative rate.
FIG. 2. False positive and false negative edges: (a) the true tree T; (b) the estimate T'. Bold edges are false negative edges in (a) and false positive edges in (b).
3. TRUE DISTANCE ESTIMATORS
3.1. Definitions
Given two signed permutations, we can compute their breakpoint distance or one of the
edit (minimum) distances (for now, the inversion distance), but the actual number of evolutionary
events is not directly recoverable. All that can be done is to estimate that number
under some assumptions about the model of evolution. Thus our true distance estimators
return the most likely number of evolutionary events for the given breakpoint or inversion
distance. We developed three such estimators: the IEBP estimator [35] approximates
the most likely number of evolutionary events working from the breakpoint distance, using
a simplified analytical derivation; the Exact-IEBP estimator [34] refines the analytical
derivation and returns the exact value for that quantity; and the EDE estimator [23] uses
curve fitting to approximate the most likely number of evolutionary events working from
the inversion distance. All three estimators provide considerably more accurate estimates
of true evolutionary distances than the breakpoint or inversion distances (at least for large
moreover, trees obtained by applying the neighbor-joining method to these estimators
are more accurate than those obtained also using neighbor-joining, but based upon
breakpoint distances or inversion distances.
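Although the three estimators derive the expectation differently (analytically for IEBP and Exact-IEBP, by simulation and curve fitting for EDE), they share the same overall shape, sketched below under simplifying assumptions: given a function returning the expected observed (breakpoint or edit) distance after k events, return the k whose expectation best matches the observed value. The function names are ours, not those of the actual implementations.

#include <cmath>
#include <functional>

// Generic shape of a "true distance" estimator: invert the expected-distance curve.
// expectedDistance(k) must return the expected observed distance after k events
// (obtained analytically or by simulation, depending on the estimator).
int estimateTrueDistance(double observedDistance, int maxEvents,
                         const std::function<double(int)> &expectedDistance)
{
    int bestK = 0;
    double bestGap = std::fabs(expectedDistance(0) - observedDistance);
    for (int k = 1; k <= maxEvents; ++k) {
        double gap = std::fabs(expectedDistance(k) - observedDistance);
        if (gap < bestGap) { bestGap = gap; bestK = k; }
    }
    return bestK;   // most plausible number of events for the observed distance
}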
3.2. Comparison of distance estimates
We simulated the Nadeau-Taylor model of evolution under different weight settings to
study the behavior of different distance estimators. The numbers of genes in the datasets
are 37 (animal mitochondria [6]), and 120 (chloroplast genome in many plants [20]). For
each dataset in the experiment, we chose a number between 1 and some upper bound B
as the number of rearrangement events. B is chosen to be 2.5 times the number of genes,
which (according to our experimental results) is enough to make the distance between two
genomes similar to the distance between two random genomes. We then computed the BP
(breakpoint) and INV (inversion) distances and corrected them to get IEBP, Exact-IEBP,
and EDE distances. In Figures 3, 4, and 5, we plot the (unnormalized) computed distances
against the actual number of events-using an inversion-only scenario for the case of 37
genes and a scenario with equally likely events as well as an inversion-only scenario for
the case of 120 genes. These figures indicate that, as expected, BP and INV distances
underestimate the actual number of events-although, when the number of events is low,
they are highly accurate and have small variance. The linear region-the range of the x-coordinate
values where the curve is a straight line-is longer for INV distances than for
BP distances so that INV distances produce unbiased estimates in a larger range than do BP
distances. In contrast, the three estimators provide good estimates on the average, although
their variances increase sharply with the edit distance values. Exact-IEBP produces good
estimates over all ranges (clearly improving on IEBP), while EDE tends to underestimate
the distance unless the scenario uses only inversions.
We also plotted the difference between the actual (true) evolutionary distance and the
minimum or estimated distance, under various models of evolution (mixtures of inver-
sions, transpositions, and inverted transpositions), for two different genome sizes (37 and
120), and for various number of events (rates of evolution). Figure 6 shows the results for
three different models of evolution on 37 and 120 genes, respectively; the values are plotted
in a cumulative fashion: at position x along the horizontal axis, we plotted the mean
absolute difference for all generated pairs with a true evolutionary distance of at most x.
These figures show that Exact-IEBP is usually the best choice, but that EDE distances are
nearly as good (and occasionally better) when the model uses only inversions-although
the variance for larger distances is high, making it difficult to draw firm conclusions. IEBP
and EDE clearly improve on BP and INV.
FIG. 3. Mean and standard deviation plots for the two distances and three distance estimators, for 37 genes under an inversion-only scenario; panels: (a) BP distance, (b) INV distance, (c) IEBP distance, (d) Exact-IEBP distance, (e) EDE distance, each plotting the computed distance against the actual number of events. The datasets are divided into bins according to their x-coordinate values (the BP or INV distance).
FIG. 4. Mean and standard deviation plots for the two distances and three distance estimators (same panels as Figure 3), for 120 genes under a scenario in which all three types of events are equally likely. The datasets are divided into bins according to their x-coordinate values (the BP or INV distance).
FIG. 5. Mean and standard deviation plots for the two distances and three distance estimators (same panels as Figure 3), for 120 genes under an inversion-only scenario. The datasets are divided into bins according to their x-coordinate values (the BP or INV distance).
FIG. 6. The mean difference between the true evolutionary distance and our five distance estimates, under three models of evolution ((a) inversions only, (b) transpositions only, (c) equally likely events), plotted as a function of the tree diameter, for 37 genes (top) and 120 genes (bottom).
3.3. Neighbor-joining performance
We conducted a simulation study to compare the performance of NJ using the same five
distances. In Figure 7, we plot the false negative rate against the normalized pairwise inversion
distances, under three different model weight settings: (1, 0, 0) (inversions only),
(0, 1, 0) (transpositions only), and (1/3, 1/3, 1/3) (all three events equally likely). In each plot
we pool the results for the same model weight but different numbers of genomes: 10, 20,
40, 80, and 160. Note that NJ(EDE) is remarkably robust: even though EDE was engineered
for an inversion-only scenario, it can handle datasets with a significant number
of transpositions and inverted transpositions almost as well. NJ(EDE) recovers 90% of
the edges even for the nearly saturated datasets where the maximum pairwise inversion
distance is close to 90% of the maximum value. That NJ(EDE) improves on NJ(IEBP),
in spite of the fact that IEBP is a comparable estimator, may be attributed to the greater
precision (smaller variance) of EDE for smaller distances-most of the choices made in
neighbor-joining are made among small distances, where EDE is more likely to return an
approximation within a small factor of the true distance. Exact-IEBP, the most expensive
of our three estimators to compute, yields the second best performance, although the difference
between the error rates of NJ(EDE) and NJ(Exact-IEBP) is too small to be statistically
significant.
FIG. 7. False negative rates of NJ methods under various distance estimators, plotted against the normalized maximum pairwise inversion distance, for 10, 20, 40, 80, and 160 genomes; panels: (a) inversions only, (b) transpositions only, (c) equally likely events. (Results for different numbers of genomes are pooled into a single figure when the model weights are identical.)
3.4. Robustness of distance estimators
As discussed earlier, estimating the true evolutionary distance requires assumptions
about the model parameters. In the case of EDE, we assume that evolution proceeded
through inversions only-so how well does NJ(EDE) perform when faced with a dataset
produced through a combination of transpositions and inverted transpositions? In the case
of the two IEBP methods, the computation requires values for the respective rates of inversion,
transposition, and inverted transposition, which obviously leaves a lot of
room for mistaken assumptions. We ran a series of experiments under conditions similar
to those shown earlier, but where we deliberately mismatched the evolutionary parameters
used in the production of the dataset and those used in the computation of the distance
estimates used in NJ. Figure 8 shows the results for the Exact-IEBP estimator (results for
FIG. 8. Robustness of the Exact-IEBP method with respect to model parameters, plotted as the false negative rate against the normalized maximum pairwise inversion distance; panels: (a) inversions only, (b) transpositions only, (c) equally likely events. Triples in the legend indicate the model values used in the Exact-IEBP method.
the other estimators are similar), indicating that our estimators, when used in conjunction
with NJ, are remarkably robust in the face of erroneous model assumptions.
4. MAXIMUM PARSIMONY AND TOPOLOGICAL ACCURACY
The main goal of phylogeny reconstruction is to produce the correct tree topology. Two
basic approaches are currently used for phylogeny reconstruction from whole genomes:
distance-based methods such as NJ applied to techniques for estimating distances and
"maximum parsimony" (MP) approaches, which attempt to minimize the "length" of the
tree, for a suitably defined measure of the length.
We examine two specific MP problems in this section: the breakpoint phylogeny prob-
lem, where we seek to minimize the total number of breakpoints over all tree edges, and the
inversion phylogeny problem, where we seek to minimize the total number of inversions.
We want to determine, using a simulation study, whether topological accuracy is improved
by reducing the number of inversions or the number of breakpoints. If possible, we also
want to determine whether the breakpoint phylogeny problem or the inversion phylogeny
problem are topologically more accurate under certain evolutionary conditions, and if so,
under which conditions.
We ran a large series of tests on model trees to investigate the hypothesis that minimizing
the total breakpoint distance or inversion length of trees would yield more topologically
accurate trees. We ran NJ on a total of 209 datasets with both inversion and breakpoint
distances. Each test consists of at least 12 data points, on sets of up to 40 genomes. We
used two genome sizes (37 and 120 genes, representative of mitochondrial and chloroplast
genomes, respectively) and various ratios of inversions to transpositions and inverted trans-
positions, as well as various rates of evolution. For each dataset, we computed the total
inversion and breakpoint distances and compared their values with the percentage of errors
(measured as false negatives).
We used the nonparametric Cox-Stuart test [11] for detecting trends-i.e., for testing
whether reducing breakpoint or inversion distance consistently reduces topological errors.
Using a 95% confidence level, we found that over 97% of the datasets with inversion
distance and over 96% of those with breakpoint distance exhibited such a trend. Indeed,
even at the 99.9% confidence level, over 82% of the datasets still exhibited such a trend.
Figures 9 and 10 show the results of scoring the different NJ trees under the two optimization
criteria: breakpoint score and inversion length of the tree. In general, the relative
ordering and trend of the curves agree with the curves of Figure 7, suggesting that decreasing
the number of inversions or breakpoints leads to an improvement in topological
accuracy. The correlation is strongest for the 120-gene case; this may be because, for the
same number of events but a larger number of genes, the rate of evolution effectively goes
down and overlap of events becomes less likely. Finally, this trend still holds under the
other evolutionary models (such as when only transpositions occur).
FIG. 9. Scoring NJ methods under various distance estimators as a function of the maximum pairwise inversion distance for 10, 20, and 40 genomes; panels: (a) breakpoint score, 37 genes; (b) breakpoint score, 120 genes; (c) inversion length, 37 genes; (d) inversion length, 120 genes. Plotted is the ratio of the NJ tree score to the model tree score (breakpoint or inversion) on an inversion-only model tree.
FIG. 10. Scoring NJ methods under various distance estimators as a function of the maximum pairwise inversion distance for 10, 20, and 40 genomes (same panels as Figure 9). Plotted is the ratio of the NJ tree score to the model tree score (breakpoint or inversion) on a model tree where the three classes of events are equiprobable.
5. SEARCHING FOR MAXIMUM PARSIMONY TREES
5.1. The lower bound and its use
The following theorem is well known:
Theorem 5.1. Let d be an n × n matrix of pairwise distances between the taxa in a
set S; let T be a tree leaf-labelled by the taxa in S; and let w be an edge-weighting on T,
so that we have d_{i,j} ≤ Σ_{e ∈ P_ij} w(e) for every pair of leaves i, j (where P_ij denotes the path between i and j in T). If 1, 2, ..., n is
a circular ordering of the leaves of T, under some planar embedding of T, then we have
Σ_{e ∈ T} w(e) ≥ (1/2)(d_{1,2} + d_{2,3} + ... + d_{n-1,n} + d_{n,1}).
This corollary immediately follows:
Corollary 5.1. Let d be the matrix of minimum distances between every pair of
genomes in a set S, let T be a fixed tree on S, and let 1, 2, ..., n be a circular ordering of
the leaves in T under some planar embedding of T. Then the length of T is at least
(1/2)(d_{1,2} + d_{2,3} + ... + d_{n-1,n} + d_{n,1}).
This corollary forms the basis of the old "twice around the tree" heuristic for the TSP
based on minimum spanning trees [19]. Note that the theorem and its corollary hold for
any distance measure that obeys the triangle inequality.
In earlier work [22, 24], we used these bounds in a simple manner to reduce the cost of
searching tree space.
. We obtain an initial upper bound on the minimum achievable inversion length by
using NJ with inversion distances. This upper bound is updated every time the search finds
a better tree.
. Each tree we examine is presented in the standard nested-parentheses format (the
nexus format [21]); this format defines a particular circular ordering of the leaves. We use
that ordering to compute the lower bound of Corollary 5.1, again using inversion distances.
If the lower bound exceeds the upper bound, the tree can be discarded.
This bounding may reduce the running time substantially, because the bound can be computed
very efficiently, whereas scoring a tree with a tool like GRAPPA [17] involves solving
numerous TSP instances. However, when the rate of evolution is high for the size of the
genome, the bound often proves loose-the bound is exact when distances are additive, but
high rates of evolution produce distances that are much smaller than the additive value.
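The bound itself is cheap to evaluate. The sketch below (our own, with hypothetical names; GRAPPA's actual code is organized differently) computes the circular-ordering lower bound of Corollary 5.1 from a leaf ordering and a pairwise distance matrix, and shows the pruning test.

#include <vector>

// Circular-ordering lower bound (Corollary 5.1): half the sum of the distances
// between consecutive leaves in a circular ordering of the tree's leaves.
// 'dist' can be any pairwise distance obeying the triangle inequality.
double circularLowerBound(const std::vector<int> &order,
                          const std::vector<std::vector<double> > &dist)
{
    double sum = 0.0;
    const size_t n = order.size();
    for (size_t i = 0; i < n; ++i)
        sum += dist[order[i]][order[(i + 1) % n]];   // wrap around to close the circle
    return sum / 2.0;
}

// A candidate tree can be discarded without scoring when its bound exceeds the
// score of the best tree found so far.
bool canPrune(double lowerBound, double bestScoreSoFar)
{
    return lowerBound > bestScoreSoFar;
}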
We thus set about improving the bound as well as how it is used in the context of
GRAPPA to prune trees before scoring them. Our new results come from three separate
ideas: (i) tighten the lower bound; (ii) return more accurate scores for trees with large edge
lengths; and (iii) process the trees so as to take better advantage of the bounds.
5.2. Tightening the bound
The particular circular order defined by the tree description is only one of a very large
number of circular orderings compatible with that tree: since the tree has no internal ordering
(i.e., no notion of left vs. right child), swapping two subtrees does not alter the
520 MORET, TANG, WANG, AND WARNOW
phylogeny, but does yield a different circular ordering. Any of these orderings defines
a valid lower bound, so that we could search through all orderings and retain the largest
bound produced in order to tighten the bound. Unfortunately, the number of compatible
circular orderings is exponential in the number of leaves, so that a full search is too ex-
pensive. We developed and tested a fast greedy heuristic, swap-as-you-go, that provides a
high-quality approximation of the optimal bound. Our heuristic starts with the given tree
and its implied circular ordering. It then traverses the tree in preorder from an arbitrarily
chosen initial leaf, deciding locally at each node whether or not to swap the children by
computing the score of the resulting circular ordering and moving towards larger values, in
standard greedy fashion. With incremental computations, such a search takes linear time,
because each swap only alters a couple of adjacencies, so that the differential cost of a
swap can be computed in constant time. In our experiments, the resulting bound is always
very close, or even equal, to the optimal bound and much better than the original value in
almost all cases.
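The following sketch conveys the idea of the heuristic in a simplified, non-incremental form: it recomputes the whole circular bound after each trial swap instead of updating it in constant time, so it is quadratic rather than linear. The tree representation and names are ours, and circularLowerBound is the function from the previous sketch.

#include <memory>
#include <utility>
#include <vector>

struct TreeNode {                       // unordered binary tree: leaf or internal node
    int leaf = -1;                      // leaf index, or -1 for an internal node
    std::unique_ptr<TreeNode> left, right;
};

static void collectLeaves(const TreeNode *t, std::vector<int> &out)
{
    if (!t) return;
    if (t->leaf >= 0) { out.push_back(t->leaf); return; }
    collectLeaves(t->left.get(), out);
    collectLeaves(t->right.get(), out);
}

// Visit internal nodes in preorder; keep a child swap whenever it enlarges the
// circular lower bound, otherwise undo it. Returns the best bound found.
double greedySwapBound(TreeNode *root, const std::vector<std::vector<double> > &dist)
{
    std::vector<int> order;
    collectLeaves(root, order);
    double best = circularLowerBound(order, dist);

    std::vector<TreeNode *> stack;
    stack.push_back(root);
    while (!stack.empty()) {
        TreeNode *u = stack.back();
        stack.pop_back();
        if (!u || u->leaf >= 0) continue;
        std::swap(u->left, u->right);               // try swapping the two subtrees
        order.clear();
        collectLeaves(root, order);
        double candidate = circularLowerBound(order, dist);
        if (candidate > best)
            best = candidate;                       // keep the swap
        else
            std::swap(u->left, u->right);           // undo it
        stack.push_back(u->left.get());
        stack.push_back(u->right.get());
    }
    return best;
}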
We tested a related bounding technique proposed by Bryant [8], but found it to be very
slow (in order to yield reasonably tight bounds, it requires the introduction of Lagrangian
variables and the solution of a system of linear equations to determine their values) and
always (in our experiments) dominated by our algorithm. Since we use bounding strictly
to reduce the total amount of work, it is essential that any lower bound be computable with
very little effort.
5.3. Accurate scoring of long edges
Long edges in the tree suffer from the same problem as large leaf-to-leaf distances: they
are seriously underestimated by an edit distance computation. Thus we decided to use
the same remedy presented in the first part of this paper, by correcting edit distances with
one of our true distance estimators. We chose the EDE estimator, because it offers the best
tradeoff between accuracy and computational cost of our three estimators and used it within
GRAPPA wherever distances are computed: in computing the distance matrix for NJ, in
computing circular lower bounds from that matrix, and, more importantly, in computing
the distance along each edge and in computing the median-of-three that lies at the heart of
the GRAPPA approach [24]. Because a distance estimator effectively stretches the range
of possible values and because that stretch is most pronounced when the edit distances are
already large (the worst case for our circular bounds), using a distance estimator yields
significant benefits-a number of our more challenging instances suddenly became very
tractable with the combination of EDE distances and our improved circular bound.
5.4. Layered search
We added a third significant improvement to the bounding scheme. Since the bound
itself can no longer be significantly improved, obtaining better pruning requires better use
of the bounds we have. Our original approach to pruning [24] was simply to enumerate all
trees, keeping the score of the best tree to date as an upper bound and computing a lower
bound for each new tree to decide whether to prune it or score it. In this approach, each
tree is generated once, bounded once, and scored at most once, but the upper bound in
use through the computation may be quite poor until close to the end if optimal and near-optimal
trees appear only toward the end of the enumeration-in which case the program
must score nearly every tree. To remedy this problem, we devised a novel and radically
different approach, which is motivated by the fact that generating and bounding a tree is
very inexpensive, whereas scoring one, which involves solving potentially large numbers
of instances of the Travelling Salesperson Problem, is very expensive.
Our new approach, which we call a layered search, still bounds each tree once and still
scores it at most once, but it typically examines the tree more than once; it works as follows.
. In a first phase, we compute the NJ tree (using EDE distances) and score it to obtain
an initial upper bound, as in the original code.
. In a second phase, every tree in turn is generated, its lower bound computed as described
above, and that bound compared with the cost of the NJ tree. Trees not pruned
away are stored, along with their computed lower bound, in buckets ordered by the value
of the lower bound. 3
. We then begin the layered search itself, which proceeds on the principle that the lower
bound of a tree is correlated with the actual parsimony score of that tree. The search looks
at each successive bucket of trees in turn, scoring trees that cannot be pruned through their
lower bound and updating the upper bound whenever a better score is found.
This search technique is applicable to any class of optimization problems where the cost
of evaluating an object in the solution space is much larger than the cost of generating that
object and works well whenever there is a good correlation between the lower bound and
the value of the optimal solution.
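Schematically, the layered search looks roughly as follows; this is our own sketch, not GRAPPA code, with 'score' standing in for the expensive TSP-based scoring step and the bucket granularity left implicit.

#include <functional>
#include <map>
#include <vector>

struct Candidate { int id; double lowerBound; };    // one entry per surviving tree

// Process the buckets in increasing order of lower bound; within each bucket,
// score trees whose bound does not already rule them out, tightening the upper
// bound as better trees are found. Returns the best score seen.
double layeredSearch(const std::vector<Candidate> &candidates,
                     double initialUpperBound,                     // e.g., score of the NJ tree
                     const std::function<double(int)> &score)
{
    std::map<double, std::vector<int> > buckets;                   // ordered by lower bound
    for (size_t i = 0; i < candidates.size(); ++i)
        if (candidates[i].lowerBound < initialUpperBound)          // pruned against the NJ score
            buckets[candidates[i].lowerBound].push_back(candidates[i].id);

    double best = initialUpperBound;
    for (std::map<double, std::vector<int> >::const_iterator it = buckets.begin();
         it != buckets.end(); ++it) {
        if (it->first >= best)
            break;                                                 // all remaining buckets are pruned
        for (size_t i = 0; i < it->second.size(); ++i) {
            if (it->first >= best)
                break;                                             // the bound caught up with the best score
            double s = score(it->second[i]);                       // expensive evaluation
            if (s < best)
                best = s;                                          // tighter upper bound
        }
    }
    return best;
}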
Our experiments indicate that, unless the interleaf distances are all nearly maximal (for
the given number of genes), the correlation between our lower bound and the parsimony
score is quite strong, so that our layered search strategy very quickly reduces the upper
bound to a score that is optimal or nearly so, thereby enabling drastically better pruning-
as detailed below, we frequently observed pruning rates of over 99.999%.
5.5. An experimental assessment of bounding
We measured the percentage of trees that are pruned through bounding (and thus not
scored) as a function of the three model parameters: number of genomes, number of genes,
and number of inversions per edge. We used an inversion-only scenario as well as one with
approximately half inversions and half transpositions or inverted transpositions. Our data
consisted of two collections of 10 datasets each for a combination of parameters. The
number of genomes was 10, 20, 40, 80, and 160, the number of genes was 10, 20, 40, 80,
and 160, and the rate of evolution varied from 2 to 8 events per tree edge, for a total of 75
parameter combinations and 1,500 datasets. We used EDE distances to score edges and our
"swap-as-you-go" bounding computation, but not layered search-because layered search
requires enumerating all trees up front, something that is simply not possible for 20 or
more genomes.
For each data set with 10 genomes, GRAPPA tested all trees, scored and updated the
upper bound if necessary, and kept statistics on the pruning rate. For 20 or more genomes,
the number of trees is well beyond the realm of enumeration-with 20 genomes, we have
(2n − 5)!! = 35!! ≈ 2.2 × 10^20 distinct trees. For these cases, we began by running GRAPPA for 6 hours to score
as many trees as possible, then used the best score obtained in this first phase as an upper
bound in a second phase where we ran another 6 hours during which a random selection
3 The required storage can exceed the memory capacity of the machine, in which case we store buckets on disk
in suitably-sized blocks. The cost of secondary memory access is easily amortized over the computation.
FIG. 11. Percentage of trees eliminated through bounding for various numbers of genes and genomes and three rates of evolution.
of trees are bounded using our method and their bounds compared to the upper bound
obtained in the first phase. The result is a (potentially very) pessimistic estimate of the
pruning rate.
Figure
11 shows the percentage of trees pruned away by the circular lower bound; in
the table, the parameter r denotes the expected number of inversions per edge used in the
simulated evolution. (We could not run enough tests for the setting of 160 genes and the highest rate of evolution,
because merely scoring such a tree accurately can easily take days of computation-the
instances of the TSP generated under these circumstances are very time-consuming.) In
comparison with the similar table for our earlier approach [23], our new approach shows
dramatic improvement, especially at high rates of evolution. We found that most circular
orderings in datasets of up to 20 genomes were eliminated. However the table also shows
that the bounding does not eliminate many trees in datasets with 40 or more genomes-
not unless these datasets have large numbers of genes. The reason is clear: the number
of genes dictates the range of values for the pairwise distances and thus also for the tree
score-in terms of inversion distances, for instance, we have roughly mn possible tree
scores, where m is the number of genes and n the number of genomes; yet the number of
distinct trees is (2n - 5)!!, a number so large that, even when only a very small fraction of
the trees are near optimal, that fraction contains so many trees that they cannot efficiently
be distinguished from others with only mn buckets. Of course, GRAPPA in 6 hours can
only examine a vanishingly small fraction of the tree space, so that the upper bound it uses
is almost certainly much too high. In contrast with these findings, our layered method gave
us pruning rates of 90% in the 10-genome case, a huge
improvement over the complete failure of pruning by the normal search method.
6. A TEST OF OUR METHODS ON REAL DATA
We repeated our analysis of the Campanulaceae dataset, which consists of 13 chloroplast
genomes, one of which is the outgroup Tobacco, but this time to reconstruct the EDE
(as opposed to the breakpoint or inversion) phylogeny. Each of the 13 genomes has 105
segments and, though highly rearranged, has what we consider to be a low rate of
evolution. In our previous analysis, we found that GRAPPA, with our first bounding ap-
proach, pruned about 85% of the trees, for a substantial speedup (on the order of 5-10)
over a version without pruning. By using EDE distances and our improved bound compu-
tation, we increased this percentage to over 95%, for another substantial speedup of 5-10.
By adding layered search, however, we managed to prune almost all of the 13.75 billion
trees-all but a few hundred thousand, which were quickly scored and dispatched, for a
further speedup of close to 20. The pruning rate was over 99.99%, reducing the number
of trees that had to be scored by a factor of nearly 800. As a result, the same dataset that
required a couple of hours on a 512-processor supercluster when using our first bounding
strategy [2, 22, 24] can now be run in a few hours on a single workstation-and in one
minute on that same cluster. In terms of our original comparison to the BPAnalysis
code, we have now achieved a speedup (on the supercluster) of one hundred million! We
also confirmed the results of our previous analysis-that is, the trees returned in our new
analysis, which uses EDE distances, match those returned in an analysis that had used
minimum inversion distances, a pleasantly robust result.
The speedup obtained by bounding depends on two factors: the percentage of trees that
can be eliminated by the bounding and the difficulty of the TSP instances avoided by using
the bounds. As Table 11 shows, when the rate of evolution is not too high, close to 100%
of the trees can be eliminated by using the bounds. However, the TSP instances solved in
GRAPPA can be quite small when the evolutionary rate is low, due to how we compress
data (see [24]). Consequently, the speedup also depends on the rate of evolution, with
lower rates of evolution producing easier TSP instances and thus smaller speedups. The
Campanulaceae dataset is a good example of a dataset that is quite easy for GRAPPA, in
the sense that it produces easy TSP instances-but even in this case, a significant speedup
results. More generally, the speedup increases with larger numbers of genomes and, to a
point, with higher rates of evolution. When one is forced to exhaustively search tree space,
these speedups represent substantial savings in time.
7. CONCLUSIONS AND FUTURE WORK
We have described new theoretical and experimental results that have enabled us to analyze
significant datasets in terms of inversion events and that also extend to models incorporating
transpositions. This work is part of an ongoing project to develop fast and robust
techniques for reconstructing phylogenies from gene-order data. The distance estimators
we have developed clearly outperform straight distance measures in terms of both accuracy
(when used in conjunction with a distance-based method such as neighbor-joining) and efficiency
(when used with a search-based optimization method such as implemented in our
GRAPPA software suite). Our current software suffers from several limitations, particularly
its exhaustive search of all of (constrained) tree space. However, the bounds we have
described can be used in conjunction with branch-and-bound (based on inserting leaves
into subtrees or extending circular orderings) as well as in heuristic search techniques.
ACKNOWLEDGMENTS
We thank R. Jansen for introducing us to this research area, and D. Sankoff and J. Nadeau for inviting us to the
DCAF meeting, during which some of the ideas in this paper came to fruition. This work is supported in part by
National Science Foundation grants ACI 00-81404 (Moret), DEB 01-20709 (Moret and Warnow), EIA 01-13095
and by the David and
Lucile Packard Foundation (Warnow).
--R
The performance of the neighbor-joining methods of phylogenetic reconstruction
GRAPPA runs in record time.
A linear-time algorithm for computing inversion distance between signed permutations with an experimental study
Sorting permutations by transpositions.
Breakpoint phylogenies.
Gene order breakpoint evidence in animal mitochondrial phy- logeny
The complexity of the breakpoint median problem.
A lower bound for the breakpoint phylogeny problem.
Formulations and hardness of multiple sorting by reversals.
Experimental and statistical analysis of sorting by reversals.
Practical Nonparametric Statistics
An empirical comparison of phylogenetic methods on chloroplast gene order data in Campanulaceae.
A new fast heuristic for computing the breakpoint phylogeny and experimental phylogenetic analyses of real and synthetic data.
Workshop on Gene Order Dynamics
DIMACS Workshop on Whole Genome Comparison.
Use of chloroplast DNA rearrangements in reconstructing plant phylogeny.
GRAPPA: Genome Rearrangements Analysis under Parsimony and other Phylogenetic Algorithms.
Transforming cabbage into turnip (polynomial algorithm for genomic distance problems).
The travelling salesman problem and minimum spanning trees.
Personal communication
An extensible file format for systematic infor- mation
New approaches for reconstructing phylogenies based on gene order.
A new implementation and detailed study of breakpoint analysis.
Lengths of chromosome segments conserved since divergence of man and mouse.
Chloroplast DNA systematics: a review of methods and data analysis.
Chloroplast and mitochondrial genome evolution in land plants.
The median problems for breakpoints are NP-complete
Chloroplast DNA evidence on the ancient evolutionary split in vascular land plants.
The neighbor-joining method: A new method for reconstructing phylogenetic trees
Multiple genome rearrangement and breakpoint phylogeny.
Probability models for genome rearrangements and linear invariants for phylogenetic inference.
Phylogenetic inference.
Exact-IEBP: a new technique for estimating evolutionary distances between whole genomes.
Estimating true evolutionary distances between genomes.
--TR
Transforming cabbage into turnip
Formulations and hardness of multiple sorting by reversals
Probability models for genome rearrangement and linear invariants for phylogenetic inference
Sorting permutations by tanspositions
Estimating true evolutionary distances between genomes
High-Performance Algorithm Engineering for Computational Phylogenetics
A New Fast Heuristic for Computing the Breakpoint Phylogeny and Experimental Phylogenetic Analyses of Real and Synthetic Data
A Lower Bound for the Breakpoint Phylogeny Problem
--CTR
Bernard M. E. Moret , Li-San Wang , Tandy Warnow, Toward New Software for Computational Phylogenetics, Computer, v.35 n.7, p.55-64, July 2002
Niklas Eriksen, Reversal and transposition medians, Theoretical Computer Science, v.374 n.1-3, p.111-126, April, 2007
Bernard M. E. Moret , Tandy Warnow, Reconstructing optimal phylogenetic trees: a challenge in experimental algorithmics, Experimental algorithmics: from algorithm design to robust and efficient software, Springer-Verlag New York, Inc., New York, NY, 2002 | breakpoint analysis;evolutionary distance;genome rearrangement;inversion distance;distance estimator;ancestral genome |
637759 | Experiences in modeling and simulation of computer architectures in DEVS. | The use of traditional approaches to teach computer organization usually generates misconceptions in the students. The simulated computer ALFA-1 was designed to fill this gap. DEVS was used to attack this complex design of the chosen architecture, allowing for the definition and integration of individual components. DEVS also provided a formal specification framework, which allowed reduction of testing time and improvement of the development process. Using ALFA-1, the students acquired some practice in the design and implementation of hardware components, which is not usually achievable in computer organization courses. | Figure
1. Organization of the Integer Unit.
This RISC processor is provided with 520 integer registers. Eight of them are global (RegGlob, shared by
every procedure), and the remaining 512 are divided in windows of 24 registers each (RegBlock).
Each window includes input, output and local registers for every procedure that has been executed recently.
When a routine begins, new registers are reserved (8 local and 8 output), and the 8 output registers of the
calling procedure are used as its inputs. A specialized 5-bit register, called CWP (Current Window Pointer),
marks the active window. Every time a new procedure starts, CWP is decremented.
Figure
2. Organization of the processor's registers.
Besides these general purpose registers, the architecture includes:
PCs: the processor has two program counters. The PC contains the address of the next instruction. The
stores the address of the PC after the execution of the present instruction. Each instruction
cycle finishes by copying the nPC to the PC, and adding 4 bytes (one word) to the nPC. If the instruction is
a conditional branch, nPC is assigned to PC, and nPC is updated with the jump address (if the jump condition
is valid).
Y: is used by the product and division operations;
BASE and SIZE: The memory is considered flat (that is, neither segmentation nor paging mechanisms
are included). Likewise, multiprogramming is not supported. The BASE register points to the lowest
address a program can access. The SIZE stores the maximum size available for the program.
PSR (Processor Status Register): stores the current status for the program. It is interpreted as follows.
Bits Content Description
31.24 Reserved
23 N - Negative 1 when the result of the last operation is negative
22 Z - Zero 1 when the result of the last operation is zero
21 V - Overflow 1 when the result of the last operation is overflow
20 C - Carry 1 when the result of the last operation carried one bit
19.12 Reserved
11.8 PIL Lowest interrupt number to be serviced.
6 PS - Previous State Last mode.
5 ET - Enable Trap 1=Traps enabled; 0=Traps disabled.
4.0 CWP - Current Window Pointer Points to the current register window.
Table 1. Contents of the Processor Status Register
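As an illustration of how software (or a simulator component) can pick the fields out of the PSR word, the following helpers assume the bit layout of Table 1; the function names are ours, not part of the Alfa-1 sources.

#include <cstdint>

// Field extraction from the 32-bit PSR, assuming the bit layout of Table 1.
inline int psrNegative(uint32_t psr)   { return (psr >> 23) & 0x1; }   // N
inline int psrZero(uint32_t psr)       { return (psr >> 22) & 0x1; }   // Z
inline int psrOverflow(uint32_t psr)   { return (psr >> 21) & 0x1; }   // V
inline int psrCarry(uint32_t psr)      { return (psr >> 20) & 0x1; }   // C
inline int psrPIL(uint32_t psr)        { return (psr >> 8)  & 0xF; }   // lowest interrupt number serviced
inline int psrPrevState(uint32_t psr)  { return (psr >> 6)  & 0x1; }   // PS
inline int psrEnableTrap(uint32_t psr) { return (psr >> 5)  & 0x1; }   // ET
inline int psrCWP(uint32_t psr)        { return  psr        & 0x1F; }  // 5-bit current window pointer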
WIM (Window Invalid Mask): this 32-bit register (one bit per window) is used to avoid overwriting a
window in use by another procedure. When CWP is decremented, these circuits verify whether the WIM bit is active
for the new window. In that case, an interrupt is raised and the interrupt service routine stores the content
of the window in memory. Usually, WIM has only one bit set to 1, marking the oldest window.
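The window bookkeeping can be summarized by a few lines of arithmetic. The sketch below assumes 32 windows (512 windowed registers with 16 new registers reserved per call, matching the 5-bit CWP); the function names are illustrative, not taken from the Alfa-1 sources.

#include <cstdint>

const int NUM_WINDOWS = 32;     // 512 windowed registers / 16 new registers per procedure

// On a procedure call CWP moves to the previous window (modulo 32); on return it moves back.
uint8_t cwpOnCall(uint8_t cwp)   { return (uint8_t)((cwp + NUM_WINDOWS - 1) % NUM_WINDOWS); }
uint8_t cwpOnReturn(uint8_t cwp) { return (uint8_t)((cwp + 1) % NUM_WINDOWS); }

// Before switching, the WIM register (one bit per window) is checked: if the bit of the
// new window is set, a trap is raised so the service routine can spill that window to memory.
bool windowOverflow(uint32_t wim, uint8_t newCwp) { return ((wim >> newCwp) & 0x1) != 0; }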
TBR (Trap Base Register): it points to the memory address storing the position of a trap routine.
Bits Content Description
31.12 Trap Base Address Base address of the Trap table
11.4 Trap Type Trap to be serviced
3.0 Constant (0000)
Table 2. Contents of the Trap Base Register
The first 20 bits (Trap Base Address) store the base address of the trap table. When an interrupt request is
received, the number of the trap to be serviced is stored in the bits 11.4. Therefore, the TBR points to the
table position containing the address of the service routine. The last 4 bits in 0 guarantees at least 16 bytes
to store each routine.
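The address computation implied by Table 2 can be written in one line; the sketch below is ours and assumes the layout described above (20-bit base in bits 31.12, trap type in bits 11.4, low four bits zero).

#include <cstdint>

// Address of the trap-table entry for a given trap type, following the TBR layout of Table 2.
uint32_t trapVectorAddress(uint32_t trapBaseAddress,   // 20-bit base, already aligned to bits 31..12
                           uint8_t  trapType)
{
    return (trapBaseAddress & 0xFFFFF000u) | ((uint32_t)trapType << 4);
}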
When the instruction set level of the SPARC architecture is analyzed, we see that each instruction has a
fixed size of 32 bits. Memory operands may be 8, 16 or 32 bits. There are basic Load/Store operations,
classified according to the size and sign of their operands.
Arithmetic and Boolean operations include add, and, or, div, mul, xor, xnor, and shift. These are able to
change the PSR, according to the operation code used. Several jump instructions are available, including
relative jumps, absolute jumps, traps, calls, and return from traps. Other instructions include the movement
of the register window, NOPs, and read/write operations on the PSR.
Multiplication uses 32-bit operands, producing 64-bit results. The most significant 32 bits are stored in the
Y register, and the remaining in the ALU-RES register. Integer division operations take a 64-bit dividend
and a 32-bit divisor, producing a 32-bit result. The Y register stores the most significant bits of the divi-
dend. One ALU input register stores the least significant bits of the dividend, and the other, the divisor. The
integer result is stored in the ALU-RES register, and the remainder in the Y register. Most instructions are
carried out by the ALU, whose structure is depicted in the following figure. It includes two multiplexers
connected to the ALU, Multiplier/Divider unit and shifter.
Figure
3. Organization of the ALU.
There are two execution modes: User and Kernel. Certain instructions can only be executed in Kernel
mode. Also, the Base and Size registers are used only when the program is running in User mode.
The CPU executes under the supervision of the Control Unit . It receives signals from the rest of the processor
using 64 input bits (organized in 5 groups: the Instruction Register, the PSR, BUS_BUSY_IN,
BUS_DACK_IN, and BUS_ERR). Its outputs are sent using 70 lines organized in 59 groups. Some of them
include reading/writing internal registers, activating lines for the ALU or multiplexers. Also, connections
with the PC, nPC, Trap controller and PSR registers are included. Finally, the Data, Address and Control
buses can be accessed.
The memory is organized using byte addressing and Little-Endian to store words. The processor issues a
memory access operation by writing an address (and data, if needed) in the bus. Then, it turns on the AS
(Address Strobe) signal, interpreted by the memory as an order to start the operation. The memory uses the
address available and analyzes the RD_WR line to see which operation was asked. If a read was issued, one
word (4 bytes) is taken from the specified address and sent through the data lines. In a write operation the
address stored in the Byte Select register (lines BSEL0.3) defines the byte to be accessed in the word
pointed to by the Address register. If an address is wrong, the ERR line is turned on. A Data Acknowledge
(DTACK) is sent when the operation finished.
The system components are interconnected using a Bus (see Figure 4). Each bus master is connected through
the BGRANT (Bus Grant) and IACK (IRQ Acknowledgment) lines to the two devices with the next
lower and higher priorities. The device with the highest priority is connected to a constant "1" signal on the
BGRANT line. The BGRANT signal is passed on to lower priority devices until it reaches a device that
requested the bus. When the device finishes the transfer, an IACK is transmitted. Input/output operations are
memory mapped. Each device has a fixed set of addresses. Data written to those addresses are interpreted
as instructions for the device. Fifteen IRQ lines (IRQ1.IRQ15) are provided, and devices are connected to
these lines. Higher priority devices are connected to lower IRQs.
Figure
4. Organization of the Bus.
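The daisy-chain arbitration can be pictured with a few lines of code; the sketch below is a simplified software rendering (device 0 has the highest priority and sees a constant "1" on its BGRANT input), not the actual Alfa-1 bus model.

#include <vector>

// BGRANT propagation along the daisy chain: the grant passes from the highest-priority
// master down the chain until it reaches a master that requested the bus.
// Returns the index of the master that takes the bus, or -1 if nobody requested it.
int grantBus(const std::vector<bool> &requested)        // requested[i]: master i wants the bus
{
    for (size_t i = 0; i < requested.size(); ++i) {
        if (requested[i])
            return (int)i;                              // this master keeps the grant
        // otherwise the grant is passed on to the next (lower-priority) master
    }
    return -1;
}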
Finally, an external cache memory was defined. The generic structure for the cache controller is defined in
figure 5. The design and implementation of these modules were not included in the original version of Alfa-
1. They were defined as an assignment done by undergraduate students, following the procedures that will
be presented in the following sections. In a first stage, the circuits were tested separately, different algorithms
were implemented, and finally the device was integrated into the architecture. Each model was developed
as a DEVS model that was integrated into a coupled model. This extension to the original architecture
(which not will be explained in detail) shows some of the capabilities for extensibility and modifiability
of Alfa-1.
Figure
5. Organization of the Cache memory.
3. IMPLEMENTING THE ARCHITECTURE AS DEVS MODELS
The architecture presented in the previous section was completely implemented using CD++. First, the behavior
of each component was carefully specified, analyzing inputs, outputs and timing for each element.
The specification also provided test cases. Then, each component was defined as a DEVS following the
specification. Afterwards, each model was implemented in CD++, including an experimental frame following
the test cases defined in the specification. Finally, the main model was built as a coupled model connecting
all the submodels previously defined. This model follows the design presented in the figure 1, and
its detailed definition can be found in [49].
Two implementations were considered. First, we reproduced the basic behavior of each circuit, coded as
transition functions. Then, some of them were implemented in detail using Boolean logic. The basic building
blocks were developed as atomic models, coupling them using digital logic concepts. In this way, two
different abstraction levels were provided. Depending on the interest, each of them can be used. Once thoroughly
tested, the basic models were integrated into higher level modules up to completing the definition of
the architecture. The following sections will be devoted to present some of the components implemented as
assignments done by our students. We show how different abstraction levels can be modeled, and present
examples of modifiability of Alfa-1.
3.1. Inc/Dec
As explained earlier, we use 520 general purpose registers organized as overlapped windows. In a given
time, only one window can be active. The Inc/Dec model is the component that chooses the active window
using a 5-bit CWP register. The models that are part of the CWP logic are shown in the figure 2. The CWP
is incremented or decremented, and its value (stored in a d-latch represented as another DEVS) is received
through the lines OP0-OP4. The outputs are transmitted through the lines RES0-RES4. This atomic model
is specified by its input ports (OP0-OP4 and the function code FCOD), its output ports (RES0-RES4), and
its DEVS transition functions. The behavior of the transition functions can be informally described as follows:
dext(x) {
	When (x is received in the input ports, store the values of FCOD and OP0.4);
	If (FCOD indicates increment) increment the value read from OP0.4;
	else decrement it;
	hold_in(active, delay);
}
dint() { passivate; }
lambda() {
	If (RES0.4 is different from the last output) send RES0.4 through the ports RES0.4;
}
Figure
6. Behavior of the transition functions for the INC/DEC model [50].
The FCOD value is used to tell if the value must be incremented or decremented. The ALU model is used
to do this operation. Here we can see that, when an external event arrives, the hold_in function is activated.
This macro represents the behavior of the DEVS time advance function (D), and it is in charge of manipulating
the sigma variable. This is a state variable predefined for every DEVS model, which represents the
remaining time up to the next scheduled internal event. The model will remain in the current state during
this time, after which an output and internal transition functions are activated. The hold_in macro makes
this timing definition easier. Passivate is another macro, which uses an infinite sigma and puts the model in
passive phase (hold_in(passive, infinite)).
The following figure shows the implementation of these functions using CD++. As we can see, the external
transition function (dext) receives five operands as inputs, together with a function code. According to this
code, the parameter is incremented or decremented. After, the model keeps the present value during a delay
related with the circuit operation. The output function (λ) is activated, and if the circuit changed its state the
present value is transmitted. Then, the internal transition function (dint) passivates the model (that is, an
internal event with infinite delay is scheduled, waiting for the next input). The constructor allows to specify
the model's name, input/output ports, and parameters.
As we can see, the definition of a DEVS atomic model is simpler than using a standard programming
language directly. We have explained some of the advantages of using DEVS in section 1, and here we can
see how to apply it to build our models. DEVS provides an interface consisting of only four functions to be
programmed. This modular definition is independent of the simulator, and it is repeated for every model.
Therefore, one can focus on model development: the user only concentrates on the behavior under external
events, the outputs that must be sent to other submodels, and the occurrence of internal events. The behavior
of every model is encapsulated in these functions, together with the elapsed time definition. Testing patterns
can be easily created, as the model can only be activated through these functions.
IncDec::IncDec( const string &name ) : Atomic( name )
{
    // The circuit delay is read from the model definition file.
    string time( MainSimulator::Instance().getParameter( description(), "preparation" ) );
    preparationTime = Time( time );
}

Model &IncDec::externalFunction( const ExternalMessage &msg ) {
    // Check the input ports, assigning the input values.
    if ( _FCOD == 1 )
    {   // Increment
        for ( int i = 0; i <= 4; i++ )
            _OLD[i] = _RES[i];               // remember the last output
        // Increment the value using the ALU, leaving the result in _RES[0..4]
    }
    else
    {   // Decrement
        for ( int i = 0; i <= 4; i++ )
            _OLD[i] = _RES[i];               // remember the last output
        // Decrement the value using the ALU, leaving the result in _RES[0..4]
    }
    this->holdIn( active, preparationTime ); // Schedule a delay for the circuit
    return *this;
}

Model &IncDec::internalFunction( const InternalMessage & ) {
    this->passivate();                       // When the delay is consumed, wait for the next input
    return *this;
}

Model &IncDec::outputFunction( const InternalMessage &msg ) {
    if ( _RES[0] != _OLD[0] || _RES[1] != _OLD[1] || _RES[2] != _OLD[2] ||
         _RES[3] != _OLD[3] || _RES[4] != _OLD[4] )
    {   // The value changed: transmit the five result bits
        sendOutput( msg.time(), RES0, _RES[0] );
        sendOutput( msg.time(), RES1, _RES[1] );
        sendOutput( msg.time(), RES2, _RES[2] );
        sendOutput( msg.time(), RES3, _RES[3] );
        sendOutput( msg.time(), RES4, _RES[4] );
    }
    return *this;
}

Figure 7. INC/DEC model definition: transition functions [50].
Once we have defined the atomic model, we can test it by injecting input values and inspecting the outputs.
An experimental frame can be built, including pairs of input/output values to test the model automatically.
In any case, we have to build a coupled model including the model to be tested. This is defined as follows:
Figure 8. INC/DEC coupled model definition [50] (CD++ model file defining the component, its ports, its links, and the preparation time of the circuit).
These definitions follow the DEVS specifications. The coupled model is defined by its components (in this case, I_D, an
instance of the IncDec model) and external parameters. Then, the links define the influencees and translation
functions, including the input/output ports of the model. In this case, the I_D model is related to the
Top model, using the input/output ports defined earlier, as the sketch below illustrates.
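Since Figure 8 is only summarized above, the following is a minimal sketch of how such a coupled model can be written in the CD++ model file format. The top-level port names (op0..op4, fcod, res0..res4) are chosen here only for illustration; the preparation time of 5 time units is the one used in the tests of Section 5:

    [top]
    components : I_D@IncDec
    in : op0 op1 op2 op3 op4 fcod
    out : res0 res1 res2 res3 res4
    Link : op0 OP0@I_D
    Link : op1 OP1@I_D
    Link : op2 OP2@I_D
    Link : op3 OP3@I_D
    Link : op4 OP4@I_D
    Link : fcod FCOD@I_D
    Link : RES0@I_D res0
    Link : RES1@I_D res1
    Link : RES2@I_D res2
    Link : RES3@I_D res3
    Link : RES4@I_D res4

    [I_D]
    preparation : 00:00:00:005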
3.2. RegGlob
This model defines the behavior of the global registers. It keeps the contents of the 8 global registers, allowing
read/write operations on them. Two auxiliary state variables, olda and oldb, store the last outputs,
and output signals are transmitted only for the bits that changed. The model's interface includes, among
other select lines, the write-enable input (CEN in {0, 1}) and the 32-bit data input (CIN in {0, ..., 2^32 - 1}).
A sketch of this model was shown in Figure 2. As we can see, it uses three select lines (asel, bsel, and
csel) to choose the two output registers and the register to be modified. An array of integers (IN) keeps the
present values of the registers (one entry per bit, 32 bits per register). The Boolean line cen (C enable line)
is used to allow write operations. The external transition function models the reception of an input: it
stores the desired operation according to the signal received, and also stores the number of the register to
be activated. A new internal event is scheduled with a predefined delay, which models the circuit delay.
If an external event arrives before the end of the delay, the operation is cancelled.
Model &RegGlob::externalFunction( const ExternalMessage &msg ) {
    switch( msg.port() ) {
        case cen:                            // The write-enable line was turned on/off
            bcen = (int) msg.value();
            break;
        case reset:                          // A reset signal was issued
            breset = (int) msg.value();
            break;
        case asel:                           // A line of the A select input was enabled:
            selecta = (int) msg.value();     // store the register number received
            break;
        case bsel:                           // A line of the B select input was enabled:
            selectb = (int) msg.value();     // store the register number received
            break;
        case csel:                           // A line of the C select input was enabled:
            selectc = (int) msg.value();     // store the register number received
            break;
        default:                             // Store the input lines (cin)
            break;
    }
    this->holdIn( active, delay );           // Wait for the next internal event
    return *this;
}

Model &RegGlob::internalFunction( const InternalMessage & ) {
    if ( breset )                            // The 8 registers (32 bits each) are cleared
        for ( int i = 0; i < 256; i++ )
            in[i] = 0;
    if ( bcen )                              // The write line was enabled:
        for ( int i = 0; i < 32; i++ )       // update the desired register with the
            in[ selectc*32 + i ] = cin[i];   // bits received through the cin input lines
    return *this;
}

Model &RegGlob::outputFunction( const InternalMessage &msg ) {
    for ( int i = 0; i < 32; i++ ) {
        if ( olda[i] != in[selecta*32+i] )                           // The register has changed:
            this->sendOutput( msg.time(), aout, in[selecta*32+i] );  // transmit it through the A output line
        if ( oldb[i] != in[selectb*32+i] )                           // The register has changed:
            this->sendOutput( msg.time(), bout, in[selectb*32+i] );  // transmit it through the B output line
    }
    return *this;
}

Figure 9. RegGlob model definition: transition functions [50].
The output function decides whether a register has changed by querying olda and oldb, which store the previous
status of the A and B lines. When the register changes, its value is sent through the chosen output (A or B).
This model shows a more interesting use of the internal transition function: here we consider
the internal state to decide how the model must react. The internal transition function checks whether the reset line is
activated; in that case, it clears the contents of every register. Then, if the cen line was activated, the value
of the chosen register is updated with the new input.
3.3. Other basic components
The architectural description is completed with several other DEVS models. We briefly describe their behavior,
without including the definition of the models' transition functions, which are built as in the previous examples.
Details of these models can be found in [49].
This circuit checks if the next window to be used will be overwritten. The component consists of a Window
Invalid Mask (WIM) register; it returns the value of the CWP-th bit of the WIM register.
Its interface includes a 4-bit input in {0, ..., 2^4 - 1}, a read/write line RD_WR in {0, 1}, and a RESET line in {0, 1}.
The memory is provided with three basic operations: read, write, and reset. When a reset is issued, the
memory's initial image is loaded. The processor writes an address on the bus and signals the memory using
the AS signal when the address is ready. Then, a read/write signal is issued. The memory reacts according
to this signal, producing an output after a time related to the memory latency.
The adder receives two inputs. Depending on the result, the Carry bit can be turned on.
These models are used to align data read/written during the Load/Store operations.
This model represents the behavior of the integer Arithmetic-Logic Unit. It is capable of executing the following
operations: add, sub, addx, subx (add/sub with carry), and, or, xor, andn, orn, xnor (negated and, or,
xor).
This group of models was included to provide the behavior of the most commonly used Boolean gates: AND, OR,
NOT and XOR. They receive binary inputs and produce a result according to the desired operation.
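As an illustration of the style in which these gates are written (the actual sources are in [49]), the following is a minimal sketch of a two-input AND gate. The port names ina, inb and out follow the names used later in the coupled models, while the class name AND2 and the members a, b and delay are assumptions made only for this sketch:

    Model &AND2::externalFunction( const ExternalMessage &msg ) {
        // Store the incoming bit in the corresponding input
        if ( msg.port() == ina )
            a = ( (int) msg.value() != 0 );
        else if ( msg.port() == inb )
            b = ( (int) msg.value() != 0 );
        this->holdIn( active, delay );           // schedule the gate delay
        return *this;
    }

    Model &AND2::outputFunction( const InternalMessage &msg ) {
        sendOutput( msg.time(), out, a && b );   // transmit the AND of both inputs
        return *this;
    }

    Model &AND2::internalFunction( const InternalMessage & ) {
        this->passivate();                       // wait for the next input change
        return *this;
    }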
The bus interface includes the Boolean lines AS, RD/WR, DTACK, ERR, RESET and BUSY (each in {0, 1}),
together with address, data and byte-select lines. The bus interprets each of the input signals, providing
outputs related to them. If a device that received a 1 in the BGRANTin port needs to write data into the
Memory, it writes a 0 in the BGRANTout port (so that no lower-priority device is able to use the bus). Then,
the device starts a bus cycle by turning on the BUSY signal. The device writes the address to be accessed
on the ADDRESS lines, and the data to be written on DATA. After that, the Byte Select Mask (BSEL) defines
which bytes in the word are used. Finally, it turns on the RD/WRout and AS lines to indicate that a Write
operation was issued. When the memory receives the AS signal, it executes a memory cycle that finishes
when the DTACKout line is turned on. The device that issued the operation receives this signal on its
DTACKin line. When the cycle has finished, if BGRANTin is still 1, the device is able to transfer new data.
Otherwise, it turns off the BUSY line, allowing a new bus operation by another device.
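The ordering of this write cycle can be sketched as follows; the BusLines structure and the function below are hypothetical and only summarize the protocol just described, they are not part of Alfa-1's sources:

    #include <cstdint>

    // Hypothetical, simplified view of the bus lines used in a write cycle.
    struct BusLines {
        bool BGRANTin = true, BGRANTout = true, BUSY = false, AS = false,
             RD_WRout = false, DTACKin = false;
        uint32_t ADDRESS = 0, DATA = 0;
        uint8_t  BSEL = 0;
    };

    void busWriteCycle( BusLines &bus, uint32_t addr, uint32_t value, uint8_t mask ) {
        if ( !bus.BGRANTin ) return;  // the device does not own the bus
        bus.BGRANTout = false;        // no lower-priority device may use the bus
        bus.BUSY = true;              // start the bus cycle
        bus.ADDRESS = addr;           // address to be accessed
        bus.DATA = value;             // data to be written
        bus.BSEL = mask;              // byte select mask: which bytes of the word are used
        bus.RD_WRout = true;          // a write operation is issued
        bus.AS = true;                // address strobe: the memory starts its cycle
        // ... the device now waits until the memory turns DTACKin on ...
        if ( !bus.BGRANTin )          // if the bus grant was lost,
            bus.BUSY = false;         // release the bus for another device
    }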
This model is used in conditional jumps to decide if a branch must be executed.
It represents the CPU clock, whose period can be configured.
This model is used to decide if access to the global registers or the register window is required. It returns the
kind of register (Register window/Global) and its number.
This model updates the nPC.
This model manages the actions that take place when an interrupt is received. The PIL (Processor Interrupt
Level) masks the interrupts. If one or more IRQs whose numbers are greater than the PIL are received,
the interrupt must be serviced. Then, we see which one has the highest priority, and the TF (Trap Found) bit
is turned on. The TT (Trap Type) register is loaded according to the highest level interrupt.
This model represents a processor register, implemented as a d-latch. The EIN line enables the inputs, and the
CLEAR line resets the register to zero.
This model is in charge of performing multiplications and divisions, turning on the condition bits.
These models represent 2- or 4-input multiplexers. A 4-bit select signal is received, and the bit that is
turned on marks which input will be sent through the output, as sketched below.
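A minimal sketch of this one-hot selection (illustrative only, not the model's actual code):

    #include <cstdint>

    // The bit turned on in 'select' marks which input is forwarded to the output.
    uint32_t mux4( uint32_t in0, uint32_t in1, uint32_t in2, uint32_t in3, uint8_t select ) {
        if ( select & 0x1 ) return in0;
        if ( select & 0x2 ) return in1;
        if ( select & 0x4 ) return in2;
        return in3;                   // select & 0x8 is assumed otherwise
    }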
This model is in charge of managing the Register Window.
This model is in charge of implementing a shifter.
These models extend the sign of an operand of 13 or 22 bits to 32 bits.
This component defines which trap must be serviced, based on a priority system. One of the input lines
signals a non-maskable trap. Another 7 bits are used to receive the number of a trap that can be masked. The
model returns a bit telling whether the trap must be serviced, and 8 bits indicating the trap type. The following
table shows the kind and priority of each available trap:
Line Description Priority Trap Type
INST_ACC_EXCEP Instruction access exception 5 0x01
ILLEG_INST Illegal instruction 7 0x02
PRIV_INST Privileged instruction 6 0x03
WIN_OVER Window overflow 9 0x05
WIN_UNDER Window underflow 9 0x06
ADDR_NOT_ALIGN Address not aligned 10 0x07
DATA_ACC_EXCEP Data access exception 13 0x09
INST_ACC_ERR Instruction access error 3 0x21
DATA_ACC_ERR Data access error 12 0x29
DIV_ZERO Division by zero 15 0x2A
DATA_ST_ERR Data store error 2 0x2B
Table 3. Available traps
According to this table, the model determines the highest-priority trap to be serviced. After a delay,
it sends the corresponding index through the output ports, as sketched below.
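A minimal sketch of this selection, assuming a table-driven representation of Table 3 (the TrapEntry structure and function below are illustrative, not the model's actual code):

    #include <cstdint>

    struct TrapEntry { bool pending; int priority; uint8_t trapType; };

    // Return the trap type of the highest-priority pending trap (a lower priority
    // number means a higher priority, as in Table 3) and report whether one was found.
    uint8_t selectTrap( const TrapEntry traps[], int n, bool &found ) {
        int best = -1;
        for ( int i = 0; i < n; i++ )
            if ( traps[i].pending && ( best < 0 || traps[i].priority < traps[best].priority ) )
                best = i;
        found = ( best >= 0 );
        return found ? traps[best].trapType : 0;
    }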
3.4. Control Unit
The Control Unit is in charge of driving the execution flow of the processor. As explained earlier, this
model uses several input/output lines. According to the input received, it issues different outputs, activating
the different circuits defined previously. Here we show part of its behavior; the specification
of the input/output sets is not included because of its size (details can be found in [49]).
Model &UC::externalFunction( const ExternalMessage &msg ) {
    if( msg.port() == clck ) {                 // A clock tick arrived
        if( waitfmc )
            this->passivate();                 // still waiting for the memory: nothing to do
        else
            ;                                  // register that a clock tick has finished
    } else if( msg.port() == DTACK ) {
        ;                                      // the memory finished its cycle
    } else if( msg.port() == CCLOGIC ) {
        ;                                      // an input arrived from a register
    } else {
        string portName;
        int portNum;
        nameNum( msg.port().name(), portName, portNum );
        if( portName == "ir" )
            ;                                  // store a new instruction in the Instruction Register
        else
            ;                                  // update the condition codes (PSR)
    }
    return *this;
}

Model &UC::internalFunction( const InternalMessage & ) {
    // When the Address Strobe is up, record that we are waiting for the end of a memory transfer.
    return *this;
}

Model &UC::outputFunction( const InternalMessage &msg ) {
    // See if the c_en line must be activated.
    // Read the Instruction Register and decode the instruction.
    // Handle branches, and transmit the outputs for the current clock tick.
    return *this;
}

Figure 10. Control Unit: transition functions.
As we can see, this model is activated by the occurrence of a clock tick. In this case, we check whether the Control
Unit is waiting for a result coming from the memory (waitfmc). In that case, we have nothing to do and the
model passivates. Otherwise, we register that a clock tick has finished. Other external inputs correspond to
the DTACK signal coming from the memory or to CCLOGIC (that is, an input arriving from a register).
We also recognize inputs for the Instruction Register (to store a new instruction to execute) or for the PSR
(to update the condition codes). The internal transition function records that, when the Address Strobe is
up, we are waiting for the end of a memory transfer. The main tasks of the control unit are executed by the output
function. As we can see in the description, the present input values are queried and, depending on the number
of the clock tick within the instruction cycle, different output lines are activated.
4. THE DIGITAL LOGIC LEVEL
The abstraction level of several models was further detailed, letting the students analyze the digital logic
level of the circuits. In the previous stage, the behavior of these circuits was defined by atomic models.
In this stage, some of these models were built using atomic models representing the basic Boolean gates
(AND, OR, NOT, XOR). These models (described in the previous section) were used as components
integrated using digital logic, and a coupled model representing the complete circuit replaced the old
atomic one. These modifications, also done as course assignments, show the extensibility and modifiability
of Alfa-1. Two of the models implemented this way are explained in the following subsections.
4.1. CMP model
The CMP is a part of the Address Unit that detects addresses falling out of the program boundaries. The
model receives two inputs (through the lines OPA and OPB, which are connected to the BASE and LIMIT
registers). As a result, it returns the signal EQ if both values are equal, or LW if A is lower than B.
Figure 11. Sketch of the Address Unit.
The model is composed of several one-bit comparators; coupling n of them generates an n-bit comparator.
The following figure shows the basic components of this building block:
Figure 12. One-bit comparator [50].
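The Boolean behavior of this building block can be summarized as follows (a sketch using the usual one-bit comparator equations; it does not claim to reproduce the exact gate labels of Figure 12):

    // EQ is on when both bits are equal; LW is on when a < b at this bit position.
    inline void oneBitCompare( bool a, bool b, bool &eq, bool &lw ) {
        eq = !( a ^ b );      // XNOR, built from the XOR and NOT gates
        lw = ( !a ) && b;     // NOT/AND combination
    }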
This model is formally described by the DEVS coupled model

    CMP = < X, Y, D, {Mi}, {Ii}, {Zij}, select >

where each Mi is an atomic model defining the corresponding building block (the Boolean gates presented
previously in section 3.3); the influencee sets Ii relate the gates of the one-bit comparator and the coupled
model itself (Self), and include, for instance, { AND_n_1 }, { Self }, { AND_n_2 }, and
{ Self, NOT_n_1, AND_n_1, XOR_n }; and each Zij is built using Ii, as described earlier.
The definition of this coupled model using CD++ is presented in the following figure:
Figure 13. CMP coupled model in CD++ (the definition lists the component gates, input/output ports such as OPAn and OPBn, and the links between them) [50].
First, we define the components of the coupled model (corresponding to the D set). Then, the input/output
ports are included (which are related to the X/Y sets defined earlier). Finally, the links show the model
influencees (which define the translation function). The select function is implicitly defined by the order
in which the model components are listed.
4.2. Chip Selector
The Chip Selector (CS) circuit determines whether an address lies between two others. The model receives
a 32-bit address and an Address Strobe (AS), and it returns a Boolean value telling whether the address is
within the boundaries.
The MASK models provide two 32-bit values (MAX Mask, MIN Mask) containing the boundaries against which
the address is compared. These models, defined originally as latches, were redefined using Boolean gates. The
input address for the chip selector is checked using two comparators, instances of the model defined in the
previous section.
Figure 14. Sketch of the Chip Selector [50].
The result obtained is transmitted through the ports LW and EQ of each comparator. Both outputs
are ORed for the first comparator (as we are interested in whether the address is lower than or equal to MAX).
Then, the LW output of the second comparator is inverted (as we are interested in whether the address is
greater than or equal to MIN). If the circuit is enabled, the result obtained is transmitted.
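This combination amounts to the following expression (a summary of the description above, not code taken from Alfa-1):

    // The chip is selected when the address is not above MAX, not below MIN,
    // and the Address Strobe is active.
    inline bool chipSelect( bool eqA, bool lwA, bool lwB, bool as ) {
        return as && ( ( eqA || lwA ) && !lwB );
    }

The corresponding CD++ coupled model is shown below.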
components: MASMAX@MAS MASMIN@MAS CMPA@CMP CMPB@CMP and1@AND and2@AND or@OR not@NOT
Link: A31@top OPA31@CMPA
Link: A31@top OPA31@CMPB
Link: A30@top OPA30@CMPA
Link: A30@top OPA30@CMPB
Link: out31@MASMAX OPB31@CMPA
Link: out31@MASMIN OPB31@CMPB
Link: out30@MASMAX OPB30@CMPA
Link: out30@MASMIN OPB30@CMPB
Link: out0@MASMAX OPB0@CMPA
Link: out0@MASMIN OPB0@CMPB
Link: AS ina@and2
Link: eq@CMPA ina@or
Link: lw@CMPA inb@or
Link: out@or ina@and1
Link: out@not inb@and1
Link: out@and1 inb@and2
Link: out@and2 CS@top
Figure 15. CS coupled model (abridged: the links for the intermediate address and mask bits follow the same pattern) [50].
5. SIMULATION RESULTS
This section shows the results obtained when some of the models previously presented are simulated.
In the first case, we show the result of incrementing a value of 20 with the INC/DEC model. The figure
shows the model inputs with their timestamps and the output values obtained.
The first step consists of giving an initial value to the circuit (zero by default). The first event
generates an output only when the model phase changes; as the preparation time for the circuit
is 5 time units, this occurs at 00:00:05:000. The second input does not generate changes in the model,
and no output is issued. At simulated time 10, a new input is inserted through the port OP2. As this value
changed, an output is generated at simulated time 15. The following two inputs are not registered because the
circuit keeps its present state. The last one increments the value in the register by inserting a 1
through the FCOD port. The incremented value can be seen 5 time units afterwards.
Figure 16. Inputs and outputs for the INC/DEC model (each input event and the corresponding output, with their timestamps).
The following example shows the execution of the RegGlob model under different inputs. At instant 0,
the C enable line is activated, allowing write operations in the register file. In this case, register 4 is selected
(csel2=1, csel1=0 and csel0=0), and the number 0xFFFFFFFF is used as input (all cin lines set to 1).
Later, at 00:00:01:00, register 2 is selected (csel2=0 and csel1=1), and the number 0x55555555 is input
(cin0=cin2=cin4=...=cin30=1, and the odd-numbered cin lines set to 0). The first value is stored in register
4, and the second in register 2.
Figure 17. Inputs/outputs of RegGlob [50].
At 00:00:02:00, C Enable is deactivated; therefore, the following operations are devoted to reading registers.
We see that the value in register 4 is sent through the A output (asel=2) and register 2 is sent
through B (bsel=1). As a result, the values previously loaded are transmitted (that is, 0xFFFFFFFF on A
and 0x55555555 on B). Afterwards, Reset is activated. When we then try to read register 4 at 00:00:05:00, we
obtain the value 0x00000000.
The next test corresponds to the TrapLogic model. Here we can see the result obtained after turning on all
the trap bits; we thus expect to obtain the index of the highest-priority pending trap. The result
obtained after the delay time corresponds to the highest-priority trap of Table 3, Data Store Error (whose
code, 0x2B, is the result we obtained). Also, the Trap Found flag is turned on.
inst_acc_excep / 1.000
illeg_inst / 1.000
priv_inst / 1.000
win_over / 1.000
win_under / 1.000
addr_not_align / 1.000
data_acc_excep / 1.000
inst_acc_err / 1.000
data_acc_err / 1.000
div_zero / 1.000
data_st_err / 1.000
trap_inst / 1.000
Figure 18. Execution results for the TrapLogic model [50].
Finally, we show two execution examples that are part of a complete program. All the examples were executed
on a Pentium processor (133 MHz), using the Linux version of CD++. The average performance for
this model was one instruction per second. The source code was translated to binary using the GNU
assembler and linker; the executable is used as the initial memory image for the simulator. The first part of the
following figure shows part of a program written in assembly language. The second part presents the binary
code generated, together with the addresses of each instruction or data word (one word each).
set 0x12345678, %r1
st %r1, [dest]
sth %r1, [dest+4]
sth %r1, [dest+10]
stb %r1, [dest+12]
stb %r1, [dest+17]
stb %r1, [dest+22]
stb %r1, [dest+27]
unimp
Interpretation of the binary code and of the final memory image (addresses shown in binary):
    Load register 1 with 0x12345678, then store it in the "dest" variable, its high
    half-word, and its individual bytes at the offsets given in the source:
    01001000  store register 1 at address 72
    01001100  store the high half-word of reg. 1 at address 76
    01010010  store the high half-word of reg. 1 at address 82
    01010100  store the high byte of reg. 1 at address 84
    01011001  store the high byte of reg. 1 at address 89
    01011110  store the high byte of reg. 1 at address 94
    01100011  store the high byte of reg. 1 at address 99
    00000000  unimp
    00100000  "dest" variable (0x20: the space character)
Figure 19. Storing a value in memory.
As we can see, this piece of code copies parts of the number 0x12345678 to certain memory addresses. We
show the translation of the binary codes based on the specification of the instruction set of the SPARC
processor. Finally, we show the memory image after the program execution. As we can see, the values
stored in memory follow the instructions defined by the executable code.
The following example shows the execution of part of another program. As we can see, the goal is to place
a 1 in a given address, and then shift this value to the left, storing the result in the following address. The
cycle is repeated 12 times.
set 1, %r1
cycle: sll %r1, %r2, %r3
stb %r3, [%r2+dest]
subcc %r2, 12, %r0
bne cycle
inc 1, %r2 !Delay slot
unimp
Initial memory image (partial): address 036 holds 10000111 00101000 01000000 00000010.
Interpretation of the program:
    Load register 1 with the value 1.
    Shift the value the number of times indicated in r2.
    Store the result in the variable dest.
    Repeat the cycle 12 times.
Instruction by instruction:
    set: load the value 1 into register 1
    sll: take register 1, shift it, and store the result in r3
    stb: store the result at address r2+dest
    subcc: subtract 12 from r2, discarding the result in r0 and setting the condition codes
    bne: relative jump of -2 words (back to address 40)
    inc: increment r2 by 1 (delay slot)
    unimp
    dest: destination variable
Final image: the value 0x01 shifted up to 12 times, stored at consecutive addresses.
Figure 20. Shifting and storing results in memory.
Once the basic behavior of the simulated computer was verified, a thorough integration test was undertaken.
As explained earlier, each circuit was defined together with a set of input/output values that were encapsulated
in an experimental framework. Once each of the models was tested, each operation in the instruction
set was checked. The procedure was developed using the verification facilities of DEVS, defining 17100
test cases. The mechanism consisted of creating an experimental framework that executed an instruction
of the instruction set. The execution result was stored in memory, and a memory dump was executed, obtaining
the memory state after the execution. This value is checked against the value obtained when the
same program is executed on the real architecture, which is included in the testing experimental framework.
This procedure allowed us to find some errors derived from the coupled model. For instance, we could see that
the division instruction was not working properly. The generated test included the following sentences:
set 274543375, %r24    ! stores a value in register 24
set 13908050, %r22     ! a second value is stored in register 22
udiv %r24, %r22, %r10  ! both values are divided and the result stored in r10
st %r10, [dest]        ! the result is stored in memory
unimp
value: .ascii "VALUE:"
dest: .word 0xFFFFFFFF ! Result of Test 100
Figure 21. Testing routine for the UDIV instruction.
When this example was executed, the testing coupled model found an error (the report lists, for each
differing field, its type, the expected value, and the value found). In this case, the destination should have
stored the value 19 (274543375 divided by 13908050). Instead, we found the value 1, showing that this
instruction had an unexpected behavior. In this way we could find and fix errors in several instructions;
we also found errors in some addition instructions and in conditional jumps with prediction. A sketch of
the comparison mechanism used by the testing framework is shown below.
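The comparison step itself can be sketched as follows; this is a simplified illustration, not the framework's actual code, and the report format only mimics the one mentioned above:

    #include <cstdio>
    #include <cstdint>

    // Compare the memory dump produced by the simulator against the image obtained
    // on the real architecture, reporting every word that differs.
    int compareDumps( const uint32_t *simulated, const uint32_t *expected, int words ) {
        int differences = 0;
        for ( int i = 0; i < words; i++ )
            if ( simulated[i] != expected[i] ) {
                printf( "Field %d: Expected %08X Found %08X\n",
                        i, (unsigned) expected[i], (unsigned) simulated[i] );
                differences++;
            }
        return differences;   // zero means the test case passed
    }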
Finally, we show part of the execution of the simulator for the example presented in Figure 20. We show a
log file including the messages interchanged between modules. As in other DEVS frameworks, there are
four kinds of messages: * (used to signal a state change due to an internal event), X (used when an external
event arrives), Y (the model's output) and done (indicating that a model has finished its task). The I
messages initialize the corresponding models. For each message we show its type, timestamp, value,
origin/destination, and the port used for the transmission.
Message I / 00:00:00:000 / Root(00) to top(01)                    // Initialize the higher level
Message I / 00:00:00:000 / top(01) to mem(02)                     // components: memory, bus, CS, etc.
Message I / 00:00:00:000 / top(01) to bus(03)
Message I / 00:00:00:000 / top(01) to csmem(04)
Message I / 00:00:00:000 / top(01) to cpu(05)
Message I / 00:00:00:000 / top(01) to c1(64)
Message I / 00:00:00:000 / top(01) to dpc(65)
Message D / 00:00:00:000 / mem(02) / ... to top(01)               // The models reply with the next
Message D / 00:00:00:000 / bus(03) / ... to top(01)               // scheduled event
Message I / 00:00:00:000 / cpu(05) to ir(06)                      // The CPU initializes its components
Message I / 00:00:00:000 / cpu(05) to pc_add(07)
Message I / 00:00:00:000 / cpu(05) to pc_mux(08)
Message * / 00:00:00:000 / cpu(05) to npc(10)                     // Take the nPC
Message Y / 00:00:00:000 / npc(10) / ... / 1.000 to pc_latch(11)  // Send the value to pc-inc
Message X / 00:00:00:000 / ... to pc_inc(13)                      // to increment it
Message D / 00:00:00:000 / ... to cpu(05)                         // Schedule the activation of pc-inc
Message * / 00:00:00:000 / cpu(05) to pc(12)
Message Y / 00:00:00:000 / pc(12) / ... to cpu(05)                // Initial address
Message * / 00:00:00:000 / cpu(05) to clock(45)                   // Clock tick
Message Y / 00:00:00:000 / clock(45) / clck / 1.000 to cpu(05)
Message D / 00:00:00:000 / clock(45) / 00:01:00:000 to cpu(05)
Message X / 00:00:00:000 / clck / 1.000 to cu(43)                 // Arrival at the Control Unit
Message Y / 00:00:00:000 / cu(43) / as / 1.000 to cpu(05)         // and activation of the components
Message * / 00:00:10:000 / cpu(05) to pc_latch(11)
Message * / 00:00:10:000 / cpu(05) to pc_inc(13)                  // Update the nPC
Message * / 00:00:20:001 / top(01) to mem(02)                     // Memory returns the first instruction
Message Y / 00:00:20:001 / mem(02) / dtack / 1.000 to top(01)
Message Y / 00:00:20:001 / mem(02) / ... to top(01)
(only an excerpt of the log is shown)
Figure 22. Log file of a simple routine.
The execution cycle starts by initializing the higher level models (memory, CPU, etc.). The message arriving
at the CPU model is sent to its lower level components: Instruction Register, PC Adder, PC multiplexer,
Control Unit, etc.
When the initialization cycle finishes, the imminent model is executed. In this case, the nPC model is activated,
transmitting the address of the next instruction. As we can see, the 2nd and 5th bits are returned with a
value of 1, which means that the nPC value is 36 (as we see in Figure 20, the program starts at
address 32). The value is sent to the pc-inc model, in charge of adding 4 to this register. The update is finished
at 10:000, as the activation time of this model was scheduled using the circuit delay. At that moment,
a value of 4 is added to the nPC, and we obtain the 3rd and 5th bits set to 1 (res3 and res5), that is,
the next PC. Afterwards, the PC is activated and the value 010000 (that is, 32) is obtained; this is the initial
address of the program. The following event is the arrival of a clock tick, sent to the processor. The CPU
schedules the next tick (in 1:00:000 time units) and transmits the signal to the Control Unit, which activates
several components: a-mux, ALU, Addr-mux, IR, etc.
We finally see, at simulated time 20:000, that the memory has returned the first instruction (compare the
results with the bit configuration at address 32). The instruction is sent to the CPU to be stored in the
Instruction Register and to continue with the execution. The rest of the instruction cycle is completed in the
same way.
We are able to follow the execution flow of any program by analyzing this log file. To simplify the analysis
of the results, we built a set of scripts using Tk that let the students choose which components should be
considered. In this way, the behavior of each of the subcomponents can be followed more easily, and the students
can analyze the behavior of the desired subsystem in detail.
6. CONCLUSION
We have presented the use of DEVS to simulate a simple computer. The models were based on the architecture
of the SPARC processor, which includes features not present in simpler CPUs. The tools can be
used in Computer Organization courses to analyze and understand the basic behavior of the different levels
of a computer system. The interaction between levels can be studied, and experimental evaluation of the
system can be done.
The use of DEVS allowed us to have reusable models (in this case, Boolean gates, comparators, multiplexers,
latches, etc.). DEVS also allowed us to provide reusable code for different configurations. We provided
different machines, one running the digital logic level and the other the instruction set, with different performance
in each case, depending on the educational needs. The concept of internal transition functions can
be used to improve the definition of the timing properties of each component, allowing the definition of complex
synchronization mechanisms. Nevertheless, in this case, most timing delays were represented as simple
input/output relations.
We have met all the goals proposed. Alfa-1 is public domain, and has been developed using CD++ (which
is also public domain and was built using GNU C++). Therefore, the toolkit is available for use in
most existing Computer Organization courses. We described several levels of the architecture (from the
digital logic level up to the instruction set). The assembly language level was also addressed using public
domain assemblers that generate executable code that can run on Alfa-1. We have easily extended the
components (for instance, including a cache memory that was not present in the first versions). We also
modified existing components (implementing, for instance, digital logic versions of some of the circuits).
Thorough testing could be done using an approach based on the construction of experimental frameworks
associated with testing functions. An experimental framework was also built for the final integrated
model.
The most important achievements were related to our educational goals. The whole project was designed
in detail as an assignment in a 3rd-year Discrete Event Simulation course. The models were formally
specified, and the specifications were used by students in a Computer Organization course to build the final
version of the architecture. These students had only taken the prerequisite programming courses, and with
only this knowledge they were able to build all the components presented here. Final integration
was planned by a group of undergraduate Teaching Assistants (who also developed the Control Unit
and a coupled model representing the whole architecture shown in Figure 1). Individual and integration
testing was also done by 2nd-year students. Several of the modifications shown here were developed as
course assignments. These facts show the feasibility of the approach from a pedagogical point of view. Upper
level courses reported higher success rates and more detailed knowledge of the subjects after using Alfa-1.
The tools can be obtained at "http://www.sce.carleton.ca/wainer/usenix". Different experiences can be carried out
using this toolkit. At the assembly language level, the students can use existing assemblers to build
executables that run on the simulator. A complete analysis of the execution flow at the instruction level can
be achieved by tracing the execution in the log file. The students can study the flow of a program and each
instruction in detail, starting from the memory image of an executable. The instruction cycle and the signal
flow in the datapath can be easily inspected. Going deeper, we can see the behavior of those circuits implemented
at the digital logic level.
By extending or changing the existing instructions, and implementing the changes in the Control Unit, the
students can experience the design of instruction sets. This allows them to practice instruction encoding
and to relate instruction definition with the underlying architecture. The students can also include
new components (as was shown with the cache memory example), change existing ones, or implement
them using digital logic.
The hierarchical nature of DEVS provides means to go deeper in the hierarchy. For instance, the logical
gates could be implemented by defining the transistor level (which has not been implemented in this version).
We planned to build an assembler and linker, but the code generated by those provided by GNU for
SPARC platforms executed without modification. Nevertheless, the implementation of an assembler and linker
are interesting assignments that can be tackled to complete the layered view applied in these courses. Also, a
debugger for the Alfa-1 architecture could be built, making it easier to study the assembly language level.
At present, Alfa-1 is being extended by defining components of the input/output subsystem. Several
input/output devices, interfaces and DMA controllers will be simulated, and different transfer techniques
(polling, interrupts, DMA) will be considered. Likewise, the implementation of different cache management
algorithms is being finished. Other tasks currently under way include the definition of a graphical interface to
enhance the use of the toolkit. The set of scripts mentioned in section 5 will be used to gather the results of
the simulations, which will then be displayed graphically. In this way, the study and analysis of the
different subsystems will be improved.
7. ACKNOWLEDGEMENTS
I want to thank the anonymous referees for the detailed comments they made on this article. I also thank Prof.
Trevor Pearce at SCE, Carleton University, for his help with the final version. Sergio Zlotnik collaborated
in the early stage of this project, presented earlier in [51]. The research was partially funded by the Usenix
foundation and UBACYT Project JW10. It was developed while Gabriel Wainer was an Assistant Professor
at the Computer Sciences Dept. of the Universidad de Buenos Aires, Argentina.
--R
"Computer Architecture: a quantitative approach"
"Computer Organization and Design: the Hardware/Software Interface"
"Computer Organization and Architecture"
"Computer Systems Design and Architecture"
"Structured Computer Organization"
"The SimpleScalar Tool Set. Version 2.0"
"An interactive environment for the teaching of computer architecture"
"Notes on p86 Assembly Language and Assembling"
"An extensible simulator for the Intel 80x86 processor family"
"The MPS Computer System Simulator"
"The PROVIR Virtual Processor"
"Performance simulation of an ALPHA microprocessor"
"Talisman: fast and accurate multicomputer simulation"
"A Block-Oriented Network Simulator (BONeS)"
"Improved parallel architectural simulations on shared-memory multiprocessors"
"Experiences in simulating a declarative multiprocessor"
"Microprocessor Architecture Design with ATLAS."
"Designing Efficient Simulations Using Maisie"
HARMAN, T. "Mastering Simulink 4"
"ACSL Reference Manual"
COMPANY. "MODSIM II, The Language for Object-Oriented Programming"
"SIMSCRIPT: A Simulation Programming Language"
"Simulation Model Design and Execution"
"Hardware Description Languages: Concepts and Principles"
"The Verilog Hardware Description Language"
"Systems Engineering with SDL"
"ALFA-0: a simulated computer as an educational tool for Computer Organization"
"CRAPS: an emulator for the SPARC processor"
"An emulator of the Atari processor"
"Object-oriented simulation with hierarchical modular models"
"DEVS Theory of Quantization"
"New Extensions to the CD++ tool"
--TR
Object-oriented simulation with hierarchical, modular models: intelligent agents and endomorphic systems
Structured computer organization (3rd ed.)
Introduction to simulation with GPSS
Improved parallel architectural simulations on shared-memory multiprocessors
Execution-driven simulation of multiprocessors
Talisman
SM-prof
Computer organization and architecture (4th ed.)
An interactive environment for the teaching of computer architecture
Using the SimOS machine simulator to study complex computer systems
The SimpleScalar tool set, version 2.0
Performance and dependability evaluation of scalable massively parallel computer systems with conjoint simulation
Iterative design of efficient simulations using Maisie
GDEVS
Hardware Description Languages
Computer Systems Design and Architecture
Simulation Model Design and Execution
Mastering SIMULINK 4
Computer Architecture; A Quantitative Approach
The VERILOG Hardware Description Language
Theory of Modeling and Simulation
Performance Simulation of an Alpha Microprocessor
Microprocessor Architecture Design with ATLAS
The Augmint multiprocessor simulation toolkit for Intel x86 architectures
Experiences in simulating a declarative multiprocessor
Using the DEVS Paradigm to Implement a Simulated Processor
Documentation for the CHIP Computer System (Version 1.1)
PROTEUS: A HIGH-PERFORMANCE PARALLEL-ARCHITECTURE SIMULATOR
The MPS Computer System Simulator | simulation in education;computer organization;applications of DEVS methodology;DEVS models |