THE ROLE OF LOGIC IN KNOWLEDGE REPRESENTATION AND COMMONSENSE REASONING*

Robert C. Moore
Artificial Intelligence Center
SRI International, Menlo Park, California 94025

ABSTRACT

This paper examines the role that formal logic ought to play in representing and reasoning with commonsense knowledge. We take issue with the commonly held view (as expressed by Newell [1980]) that the use of representations based on formal logic is inappropriate in most applications of artificial intelligence. We argue to the contrary that there is an important set of issues, involving incomplete knowledge of a problem situation, that so far have been addressed only by systems based on formal logic and deductive inference, and that, in some sense, probably can be dealt with only by systems based on logic and deduction. We further argue that the experiments of the late 1960s on problem-solving by theorem-proving did not show that the use of logic and deduction in AI systems was necessarily inefficient, but rather that what was needed was better control of the deduction process, combined with more attention to the computational properties of axioms.

I INTRODUCTION

In his AAAI presidential address, Allen Newell [1980] presented his view of the role that logic ought to play in representing and reasoning with commonsense knowledge. Probably the most concise summary of that view is his proposition that "the role of logic [is] as a tool for the analysis of knowledge, not for reasoning by intelligent agents" [p. 16]. What I understand Newell to be saying is that, while logic provides an appropriate framework for analyzing the meaning of expressions in representation formalisms and judging the validity of inferences, logical languages are themselves not particularly good formalisms for representing knowledge, nor is the application of rules of inference to logical formulas a particularly good method for commonsense reasoning.

As to the first part of this position, I could not agree more. Whatever else a formalism may be, at least some of its expressions must have referential semantics if the formalism is really to be a representation of knowledge. That is, there must be some sort of correspondence between an expression and the world, such that it makes sense to ask whether the world is the way the expression claims it to be. To have knowledge at all is to have knowledge** that the world is one way and not otherwise. If one's "knowledge" does not rule out any possibilities for how the world might be, then one really does not know anything at all. Moreover, whatever AI researchers may say, examination of their actual practice reveals that they do rely (at least informally) on being able to provide referential semantics for their formalisms. Whether we are dealing with conceptual dependencies, frames, semantic networks, or what have you, as soon as we say that a particular piece of structure represents the assertion (or belief, or knowledge) that John hit Mary, we have hold of something that is true if John did hit Mary and false if he didn't.

--------
* Preparation of this paper was supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this paper are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.
** Or at least a belief; most people in AI don't seem overly concerned about truth in the actual world.
Now, mathematical logic (especially model theory) is simply the branch of mathematics that deals with this sort of relationship between expressions and the world. If one is going to provide an analysis of the referential semantics of a representation formalism, then, a fortiori, one is going to be engaged in logic. As Newell puts it [p. 17], "Just as talking of programmerless programming violates truth in packaging, so does talking of a non-logical analysis of knowledge." It may be objected that Newell and I are both overgeneralizing in defining logic so broadly as to include all possible methods for addressing this issue, but the fact remains that the only existing tools for this kind of semantic analysis have come from logic. I know this view is very controversial in AI, but I will not argue the point any further for two reasons. First, it has already been argued quite eloquently by Pat Hayes [1977], and second, I want to go on to those areas where I disagree with Newell.

The main point on which I take issue with Newell is his conclusion that logical languages and deductive inference are not very useful tools for implementing (as opposed to analyzing) systems capable of commonsense reasoning. Newell does not present any real argument in support of this position, but instead says [p. 17] that "the lessons of the sixties taught us something about the limitations of using logics for this role." In my view, Newell has seriously misread the lessons of the sixties with regard to this issue. It appears to me that a number of important features of commonsense reasoning can be implemented only within a logical framework. Consider the following problem, adapted from Moore [1975, p. 28]. Three blocks, A, B, and C, are arranged as shown:

    -----   -----   -----
    | A |   | B |   | C |
    -----   -----   -----

A is green, C is blue, and the color of B is unstated. In this arrangement of blocks, is there a green block next to a block that is not green?
It should be clear with no more than a moment's reflection that the answer is "yes." If B is green, it is a green block next to the nongreen block C; if B is not green, then A is a green block next to the nongreen block B.

How is a person able to solve this problem? What sort of reasoning mechanisms are required? At least three distinctly "logical" factors seem to be involved: (1) the ability to see that an existentially quantified proposition is true, without knowing exactly which object makes it true, (2) the ability to recognize that, for a particular proposition, either it or its negation must be true, and (3) the ability to reason by cases. So far as I know, none of these abilities are possessed by any AI system not explicitly based on formal logic. Moreover, I would claim that, in a strong sense, these issues can be addressed only by systems that are based on formal logic. To justify this claim we will need to examine what it means to say that a system uses a logical representation or that it reasons by deductive inference. Then we will try to re-evaluate what was actually shown by the disappointing results of the early experiments on problem-solving by theorem-proving, which we must do if the arguments presented here are correct and if there is to be any hope of creating systems with commonsense reasoning abilities comparable to those possessed by human beings.
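As an illustration of what these three abilities come to computationally, here is a minimal sketch in Python (ours, not Moore's; the block names and the "non-green" placeholder value are illustrative assumptions) of the case analysis behind the blocks problem:

    # Case analysis: B's color is unknown, so we split on the two cases
    # "B is green" and "B is not green."
    blocks = ["A", "B", "C"]                 # arranged left to right
    known = {"A": "green", "C": "blue"}      # B's color is unstated

    def witness(colors):
        """Return a green block adjacent to a non-green block, if any."""
        for left, right in zip(blocks, blocks[1:]):
            if colors[left] == "green" and colors[right] != "green":
                return (left, right)
            if colors[right] == "green" and colors[left] != "green":
                return (right, left)
        return None

    cases = [dict(known, B="green"), dict(known, B="non-green")]
    print(all(witness(c) is not None for c in cases))   # True: the answer is "yes"

Note that the existential claim is established without a fixed witness: in one case the witness pair is (B, C), in the other it is (A, B). That is exactly why a system must be able to assert a disjunction without deciding between its disjuncts.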
II WHAT IS A LOGICAL REPRESENTATION?

The question of what it means to use a logic for representing knowledge in a computer system is less straightforward than it might seem. In mathematics and philosophy, a logic is a language--i.e., a set of formulas--with either a formal inference system or a formal semantics (or both).* To use a logic in a computer system, we have to encode those formulas somehow as computer data structures. If the formulas are in "Cambridge Polish" notation, e.g.,

(EVERY X (IMPLIES (MAN X) (MORTAL X))),

we may be tempted to assume that the corresponding LISP S-expression must be the data structure that represents the formula in the computer. This is in fact the case in many systems, but using more sophisticated data structures certainly does not mean that we are not implementing a logical representation. For example, Sickel [1976] describes a theorem-proving system in which a collection of formulas is represented by a graph, where each node represents a formula, and each link represents a possible unification (i.e., pattern match) of two formulas, with the resulting substitution being stored on the link. Furthermore, Sickel notes that the topology of the graph, plus the substitutions associated with the links, carries all the information needed by the theorem-prover--so the actual structure of the formulas is not explicitly represented at all!

This example suggests that deficiencies attributed to logical representations may be artifacts of naive implementations and do not necessarily carry over when more sophisticated techniques are used. For instance, one of the most frequently claimed advantages of semantic nets over logic as a representation formalism is that the links in the semantic net make it easier to retrieve information relevant to a particular problem. Sickel's system (along with that of Kowalski [1975]) would seem to be at least as good as most semantic-net formalisms in this respect. In fact, it may even be better, since following a link in a semantic net usually does not guarantee that the subsequently attempted pattern match will succeed, while in Sickel's or Kowalski's system, it does.

Given that the relationship between a logical formula and its computer implementation can be as abstract as it is in Sickel's system, it seems doubtful to me that we could give any clear criteria for deciding whether a particular system really implements a logical representation. I think that the best way out of this dilemma is to give up trying to draw a line between logical and nonlogical representations, and instead ask what logical features particular representation formalisms possess.

If we adopt this point of view, the next question to ask is what logical features are needed in a general-purpose representation formalism. My answer is that, at a minimum, we need all the features of classical first-order logic with equality. Perhaps the most basic feature of first-order logic is that it describes the world in terms of objects and their properties and relations. I doubt that anyone in AI could really complain about this, as virtually all AI representation formalisms make use of these concepts. It might be argued that one needs more than just objects, properties, and relations as primitive notions, but it should be kept in mind that first-order logic places no limits on what can be regarded as an object. Times, events, kinds, organizations, worlds, sentences--not just concrete physical objects--can all be treated as logical individuals. Furthermore, even if we decide we need "nonstandard" features such as higher-order or intensional operators, we can still incorporate them within a logical framework.

--------
* For example, for several decades there were formal inference systems for modal logic [Hughes and Cresswell, 1968], but no semantics; Montague's [1974] intensional logic has a formal semantics, but no inference system.
For me, however, it is not the basic "metaphysical" notions of object, property, and relation that are the essential features of logic as a representation formalism, but rather the kinds of assertions that logic lets us make about them. Most of the features of logic can be seen as addressing the problem of how to describe an incompletely known situation. Specifically: existential quantification allows us to say that something has a certain property without having to know which thing has that property. Universal quantification allows us to say that everything in a certain class has a certain property without having to know what everything in that class is. Disjunction allows us to say that at least one of two statements is true without having to know which statement is true. Negation allows us to distinguish between knowing that a statement is not true and not knowing that it is true. Finally, logic lets us use different referring expressions without knowing whether they refer to the same object, but provides us with the equality predicate to assert explicitly whether or not they do.

One way that logic has been criticized is not to claim that the above features are unnecessary or harmful, but rather to argue that logic lacks some other essential feature--for instance, the ability to express control information. This was the basis of the early MIT-led criticism of theorem-proving research (e.g., [Winograd, 1972, Chapter 6]), which was, I believe, largely justified. This sort of problem, however, can be addressed and, in fact, has been [Hayes, 1973] [McDermott, 1978] [Kowalski, 1979] [Moore, 1975] by extending logic in various ways (see Section III), rather than by throwing it out and starting over. Moreover, the criticism quickly turned into a much more radical attack on any use of logic or deduction at all in AI [Hewitt, 1973] [Hewitt, 1975] [Minsky, 1974, Appendix]. That assault, in my view, was tremendously detrimental to serious research on knowledge representation and commonsense reasoning and represents the position I primarily want to argue against.

The major reason I regard the features of first-order logic as essential to any general-purpose representation formalism is that they are applicable to expressing knowledge about any domain. That is, it doesn't really matter what part of the world we are talking about; it always may be the case that we have only partial knowledge of a situation, and we need some of these logical features to express or reason with that knowledge. This can be seen in the example presented in Section I. Reasoning about the position and color of blocks is certainly no more inherently logical than reasoning about anything else. The logical complexity of the problem comes from the fact that we are asked whether any blocks satisfy a given condition, but not which ones, and that we don't know the color of the middle block. If we had a complete description of the situation--if we were told the color of the middle block--we could just "read off" the answer to the question from the problem description without doing any reasoning at all.

Similar situations can easily arise in more practical domains as well. For instance, in determining a course of treatment, a physician may not need to decide between two possible diagnoses, either because the treatment is the same in either case or because only one of the two is treatable at all. Now, as far as I know, none of the inference methods currently being used in expert systems for medical diagnosis are capable of doing the sort of general reasoning by cases that ultimately justifies the physician's actions in such situations. Some systems have ad hoc rules or procedures for these special cases, but the creators of the systems have themselves had to carry out the relevant instances of reasoning by cases, because the systems are unable to. But this means that, in any situation the system designers failed to anticipate, the systems will fail if reasoning by cases is needed. It seems, though, that the practical utility of systems capable of handling only special cases has created a false impression that expert systems have no need for this kind of logic.

To return to the main issue, I simply do not know what it would mean for a system to use a nonlogical representation of a disjunctive assertion or to use a nonlogical inference technique for reasoning by cases. It seems to me that, to the extent any representation formalism has the logical features discussed above, it is a logic, and that to the extent a reasoning procedure takes account of those features, it reasons deductively. It is conceivable that there might be a way of dealing with these issues that is radically different from current logics, but it would still be some sort of logic, and, in any event, at the present time none of the systems that are even superficially different from standard logics have any way of dealing with them at all. Furthermore, the idea that one can get by with only special-purpose deduction systems doesn't seem very plausible to me either. No one in the world is an expert at reasoning about a block whose color is unknown between two blocks whose color is known, yet anyone can see the answer to the problem in Section I. Intelligence entails being able to cope with novelty, and sometimes what is novel about a situation is the logical structure of what we know about it.

III WHY DID EARLY EXPERIMENTS FAIL?

The bad reputation that logic has suffered from in AI circles for the past decade or so stems from attempts in the late 1960s to use general-purpose theorem-proving algorithms as universal problem-solvers. The idea was to axiomatize a problem situation in first-order logic and express the problem to be solved as a theorem to be proved from the axioms, usually by applying the resolution method developed by Robinson [1965]. The results of these experiments were disappointing. The difficulty was that, in the general case, the search space generated by the resolution method grows exponentially (or worse) with the number of formulas used to describe a problem, so that problems of even moderate complexity could not be solved in reasonable time. Several domain-independent heuristics were proposed to try to deal with this issue, but they proved too weak to produce satisfactory results. The lesson that was generally drawn from this experience was that any attempt to use logic or deduction in AI systems would be hopelessly inefficient.
But, if the arguments made here are correct, there are certain issues in commonsense reasoning that can be addressed only by using logic and deduction, so we would seem to be at an impasse. A more careful analysis, however, suggests that the failure of the early attempts to do commonsense reasoning and problem-solving by theorem-proving had more specific causes that can be attacked without discarding logic itself. I believe that the earliest of the MIT criticisms was in fact the correct one: there is nothing particularly wrong with using logic or deduction per se, but a system must have some way of knowing which inferences it should make out of the many possible alternatives.

A very simple, but nonetheless important, instance of this is deciding whether to use implicative assertions in a forward-chaining or backward-chaining manner. The deductive process can be thought of as a bidirectional search, partly working forward from premises to conclusions, partly working backward from goals to subgoals, and converging somewhere in the middle. Thus, if we have an assertion of the form (P -> Q), we can use it to generate either the assertion Q, given the assertion P, or the goal P, given the goal Q. Some early theorem-proving systems utilized every implication both ways, leading to highly redundant searches. Further research produced more sophisticated methods that avoid some of these redundancies. Eliminating redundancies, however, creates choices as to which way assertions are to be used. In the systems that attempted to use only domain-independent control heuristics, a uniform strategy had to be imposed. Often the strategy was to use all assertions only in a backward-chaining manner, on the grounds that this would at least guarantee that all the inferences drawn would be relevant to the problem at hand.

The difficulty with this approach is that the question of whether it is more efficient to use an assertion for forward or backward chaining can depend on the specific form of that assertion. Consider, for instance, the schema*

(EVERY X (IMPLIES (P (F X)) (P X)))

Instances of this schema include such things as:

(EVERY X (IMPLIES (JEWISH (MOTHER X)) (JEWISH X)))
(EVERY X (IMPLIES (LESSP (SUCCESSOR X) Y) (LESSP X Y)))

That is, a person is Jewish if his or her mother is Jewish, and a number X is less than a number Y if the successor of X is less than Y. Suppose we were to try to use an assertion of this form for backward chaining, as most "uniform" proof procedures would. It would apply to any goal of the form (P X) and produce the subgoal (P (F X)). This expression, however, is also of the form (P X), so the process would be repeated, resulting in an infinite descending chain of subgoals:

GOAL: (P X)
GOAL: (P (F X))
GOAL: (P (F (F X)))
GOAL: (P (F (F (F X)))), etc.

If, on the other hand, we use the rule for forward chaining, the number of applications is limited by the complexity of the assertion that originally triggers the inference:

ASSERT: (P (F (F X)))
ASSERT: (P (F X))
ASSERT: (P X)

It turns out, then, that the efficient use of a particular assertion often depends on exactly what that assertion is, as well as on the context of other assertions in which it is embedded.

--------
* I am indebted to Richard Waldinger for suggesting this example.
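The contrast between the two chaining directions can be made concrete with a small sketch (ours, not from the paper; Python, with terms represented as nested tuples, an illustrative encoding): backward use of the rule regenerates ever-deeper subgoals, while forward use strips one application of F per step and must terminate.

    # Rule: P(f(x)) -> P(x). A fact is a tuple such as ("P", ("f", "a")).
    def backward(goal, depth=0, limit=4):
        """Backward chaining: the subgoal P(x) spawns P(f(x)) without end."""
        print("  " * depth + "GOAL: " + repr(goal))
        if depth >= limit:      # artificial cutoff for the infinite regress
            return
        backward(("P", ("f", goal[1])), depth + 1, limit)

    def forward(fact):
        """Forward chaining: P(f(t)) yields P(t); bounded by the depth of t."""
        print("ASSERT: " + repr(fact))
        pred, term = fact
        if isinstance(term, tuple) and term[0] == "f":
            forward((pred, term[1]))        # strip one application of f

    backward(("P", "a"))                    # descends until the depth cutoff
    forward(("P", ("f", ("f", "a"))))       # exactly three assertions, then stops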
Other examples illustrating this point are given by Kowalski [1979] and Moore [1975], involving not only the forward/backward-chaining distinction, but other control decisions as well. Since specific control information needs to be associated with particular assertions, the question arises as to how to provide it. The simplest way is to embed it in the assertions themselves. For instance, the forward/backward-chaining distinction can be encoded by having two versions of implication--e.g., (P -> Q) to indicate forward chaining and (Q <- P) to indicate backward chaining. This approach originated in the distinction made in the programming language PLANNER between antecedent and consequent theorems. A more sophisticated approach is to make decisions, such as whether to use an assertion in the forward or backward direction, themselves questions for the deduction system to reason about using "metalevel" knowledge. The first detailed proposal along these lines seems to have been made by Hayes [1973], while experimental systems have been built by McDermott [1978] and de Kleer et al. [1979], among others.

Another factor that can greatly influence the efficiency of deductive reasoning is the exact way in which a body of knowledge is formalized. That is, logically equivalent formalizations can have radically different behavior when used with standard deduction techniques. For example, we could define ABOVE as the transitive closure of ON in at least three ways:*

(EVERY (X Y) (IFF (ABOVE X Y)
    (OR (ON X Y) (SOME Z (AND (ON X Z) (ABOVE Z Y))))))

(EVERY (X Y) (IFF (ABOVE X Y)
    (OR (ON X Y) (SOME Z (AND (ON Z Y) (ABOVE X Z))))))

(EVERY (X Y) (IFF (ABOVE X Y)
    (OR (ON X Y) (SOME Z (AND (ABOVE X Z) (ABOVE Z Y))))))

Each of these axioms will produce different behavior in a standard deduction system, no matter how we make such local control decisions as whether to use forward or backward chaining. The first axiom defines ABOVE in terms of ON, in effect, by iterating upward from the lower object, and would therefore be useful for enumerating all the objects that are above a given object. The second axiom iterates downward from the upper object, and could be used for enumerating all the objects that a given object is above. The third axiom, though, is essentially a "middle out" definition, and is hard to control for any specific use.

The early systems for problem-solving by theorem-proving were often inefficient because axioms were chosen for their simplicity and brevity, without regard to their computational properties--a problem that also arises in conventional programming. To take a well-known example, the simplest LISP program for computing the nth Fibonacci number is a doubly recursive procedure that takes O(2^n) steps to execute, while a slightly more complicated and less intuitively defined singly recursive procedure can compute the same function in O(n) steps. Kowalski [1974] was perhaps the first to note that choosing among alternatives such as these involves very much the same sort of decisions as are made in conventional programming. In fact, he observed that there are ways to formalize many functions and relations so that the application of standard deduction methods will have the effect of executing them as efficient computer programs. These observations have led to the development of the field of "logic programming" [Kowalski, 1979] and the creation of new computer languages such as PROLOG [Warren and Pereira, 1977].

--------
* These formalizations are not quite equivalent, as they allow for different possible interpretations of ABOVE if infinitely many objects are involved. They are equivalent, however, if only a finite set of objects is being considered.
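The Fibonacci contrast just cited is easy to exhibit. The sketch below (ours, written in Python rather than LISP) shows the doubly recursive definition alongside a singly recursive accumulator version of the same function, mirroring the way a computationally better axiomatization computes the same relation:

    def fib_naive(n):
        """Doubly recursive: on the order of 2^n calls."""
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    def fib_linear(n, a=0, b=1):
        """Singly recursive with accumulators: O(n) calls."""
        return a if n == 0 else fib_linear(n - 1, b, a + b)

    print(fib_naive(10), fib_linear(10))    # 55 55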
IV SUMMARY AND CONCLUSIONS

In this paper, I have tried to argue that there is an important class of problems in knowledge representation and commonsense reasoning, involving incomplete knowledge of a problem situation, that so far have been addressed only by systems based on formal logic and deductive inference, and that, in some sense, probably can be dealt with only by systems based on logic and deduction. I have further argued that, contrary to the conventional wisdom in AI, the experiments of the late 1960s did not show that the use of logic and deduction in AI systems was necessarily inefficient, but only that better control of the deduction process was needed, along with more attention to the computational properties of axioms.

I would certainly not claim that all the problems of deductive inference can be solved simply by following the prescriptions of this paper. Further research will undoubtedly uncover as yet undiagnosed difficulties and, one hopes, their solutions. My objective here is to encourage consideration of these problems, which have been ignored for a decade by most of the artificial-intelligence community, so that at future conferences we may hear about their solution rather than just their existence.

ACKNOWLEDGMENTS

I wish to thank Nils Nilsson and Bill Woods for helpful comments on previous versions of this paper.

REFERENCES

de Kleer, J. et al. [1979] "Explicit Control of Reasoning," in Artificial Intelligence: An MIT Perspective, Vol. 1, P. H. Winston and R. H. Brown, eds., pp. 93-116 (The MIT Press, Cambridge, Massachusetts, 1979).

Hayes, P. J. [1973] "Computation and Deduction," Proc. 2nd Symposium on Mathematical Foundations of Computer Science, Czechoslovak Academy of Sciences, pp. 105-116 (September 1973).

Hayes, P. J. [1977] "In Defence of Logic," Proc. Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, pp. 559-565 (August 22-25, 1977).

Hewitt, C. et al. [1973] "A Universal Modular ACTOR Formalism for Artificial Intelligence," Advance Papers of the Third International Joint Conference on Artificial Intelligence, Stanford University, Stanford, California, pp. 235-245 (August 20-23, 1973).

Hewitt, C. [1975] "How to Use What You Know," Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR, pp. 189-198 (September 3-8, 1975).

Hughes, G. E. and Cresswell, M. J. [1968] An Introduction to Modal Logic (Methuen and Company Ltd., London, England, 1968).

Kowalski, R. [1974] "Predicate Logic as a Programming Language," in Information Processing 74, pp. 569-574 (North-Holland Publishing Company, Amsterdam, The Netherlands, 1974).

Kowalski, R. [1975] "A Proof Procedure Using Connection Graphs," Journal of the Association for Computing Machinery, Vol. 22, No. 4, pp. 573-595 (October 1975).

Kowalski, R. [1979] Logic for Problem Solving (Elsevier North Holland, Inc., New York, New York, 1979).

McDermott, D. [1978] "Planning and Acting," Cognitive Science, Vol. 2, No. 2, pp. 71-109 (April-June 1978).

Minsky, M. [1974] "A Framework for Representing Knowledge," MIT Artificial Intelligence Laboratory, AIM-306, Massachusetts Institute of Technology, Cambridge, Massachusetts (June 1974).

Montague, R. [1974] "The Proper Treatment of Quantification in Ordinary English," in Formal Philosophy, Selected Papers of Richard Montague, R. H. Thomason, ed., pp. 188-221 (Yale University Press, New Haven, Connecticut, and London, England, 1974).
Moore, R. C. [1975] Reasoning from Incomplete Knowledge in a Procedural Deduction System, MIT Artificial Intelligence Laboratory, AI-TR-347, Massachusetts Institute of Technology, Cambridge, Massachusetts (December 1975). Also published by Garland Publishing, Inc. (New York, New York, 1980).

Newell, A. [1980] "The Knowledge Level," Presidential Address, American Association for Artificial Intelligence, AAAI-80, Stanford University, Stanford, California (19 August 1980). Reprinted in AI Magazine, Vol. 2, No. 2, pp. 1-20 (Summer 1981).

Robinson, J. A. [1965] "A Machine-Oriented Logic Based on the Resolution Principle," Journal of the Association for Computing Machinery, Vol. 12, No. 1, pp. 23-41 (January 1965).

Sickel, S. [1976] "A Search Technique for Clause Interconnectivity Graphs," IEEE Transactions on Computers, Vol. C-25, No. 8, pp. 823-835 (August 1976).

Warren, D. H. D. and Pereira, L. M. [1977] "PROLOG--The Language and its Implementation Compared with LISP," in Proceedings of the Symposium on Artificial Intelligence and Programming Languages (ACM); SIGPLAN Notices, Vol. 12, No. 8, and SIGART Newsletter, No. 64, pp. 109-115 (August 1977).

Winograd, T. [1972] Understanding Natural Language (Academic Press, New York, New York, 1972).
FOUNDATIONS OF ENVISIONING

Johan de Kleer and John Seely Brown
XEROX PARC, Cognitive and Instructional Sciences
3333 Coyote Hill Road
Palo Alto, California 94304

This paper explores a particular kind of qualitative reasoning, called envisioning, that is capable of producing causal explanations for device behavior. It has been implemented in a computer program, ENVISION, which can analyze a wide variety of thermal, fluid, electrical, translational and rotational devices. Rather than present the technical details of the envisioning process, this paper examines the theoretical foundations upon which it is built. Many of these considerations are ones that any builder of qualitative reasoning systems must pay attention to. Two such considerations are explanation and robustness: What notion of causality is adequate for causal explanations of device behavior? How can there be any confidence in the analysis of a novel device?

INTRODUCTION

The theory of envisioning [1] [3] has two central characteristics. First, it is a physics, in that it can be used to predict the qualitative behavior of devices. Envisioning is not concerned with post-hoc rationalization of observed behavior, but rather with constructing predictions that are consistent with observed device behavior. Second, it is a theory of causality, in that it can be used to produce causal explanations acceptable to humans. Such a theory of causal, qualitative reasoning is important for both cognitive science and artificial intelligence.

Envisioning is a form of reasoning that produces a causal explanation for the behavior of a physical system by explaining how disturbances from equilibrium propagate. (Envisioning is often confused with qualitative simulation, which it is only in its most degenerate form. In more complex cases it is primarily concerned with introducing and manipulating assumptions while maintaining a notion of causality.) A typical kind of physical mechanism we might envision is a pressure regulator (see Figure 1). A pressure regulator's purpose is to maintain a specific pressure even though line loads and pressure sources vary. The envisioning process produces an explanation of how this is achieved (we have compressed the explanation and stated it in English):

"An increase in source (A) pressure increases the pressure drop across the valve (B). Since the flow through the valve is proportional to the pressure across it, the flow through the valve also increases. This increased flow will increase the pressure at the load (C). However, this increased pressure is sensed (D), causing the diaphragm (E) to move downward against the spring pressure. The diaphragm is mechanically connected to the valve, so the downward movement of the diaphragm will tend to close the valve, thereby pinching off the valve. Because the flow is now restricted, the output pressure will rise much less than it otherwise would have and thus remains approximately constant."

[Figure 1: Pressure Regulator]

As this explanation for the pressure regulator illustrates, the task of envisioning is a difficult one. It must determine the causal inputs and outputs for each component, which can be a subtle task, especially when a component has more than two ports connecting it to other components of the device. It must detect and correctly determine the consequences of all the kinds of positive and negative feedback. Furthermore, since qualitative descriptions provide only
partial information, it must be able to analyze underdetermined or underconstrained situations. Detailing the different kinds of reasoning strategies that enable the envisioning process to achieve these goals is not the subject of this paper (see [3]); rather, we examine the nature of the input evidence that the envisioning process operates on, the conclusions it produces and the relationship between these two.

An objective of the investigation is to explore a theory of causal reasoning that can, given a physical situation (in particular, a novel situation), correctly predict ensuing behavior in that situation. The situations are described by us, the investigators. This raises an enormous problem: even if the conclusions of the causal reasoning are correct, is its success attributable to the theory or to the way the situation is encoded? (We assume that the envisioning system has available a library of abstract descriptions of the behaviors of device parts.) Certainly, the causal reasoning process will make deductions not present in the description of the situation, but the question remains whether these deductions form a significant portion of the total effort required to describe and analyze a physical situation. Is causal reasoning doing something interesting, or is most of the work it appears to be doing actually pre-encoded in the evidence provided to it?

One way to ensure that the "conclusions" have not been surreptitiously encoded into the evidence that the envisioner operates on is to make the evidence a well-defined and distinct ontological type, distinct from that of the conclusions. In particular, we require that the evidence be a description of the physical structure of the system, namely its constituent parts and how they are attached to each other. The conclusions describe the behavior, or functioning, of the overall system. The task of causal reasoning is to deduce the functioning of the system from its structure.

A second strategy to ensure that the conclusions are not pre-encoded in the evidence is to design the component models and the reasoning process to be "context free." The same library of models and the same causal reasoning process should successfully analyze a wide variety of physical devices, particularly devices that have not been analyzed before or devices under new operating conditions. We call this meta-theoretic constraint the no-function-in-structure principle. The class of devices constructable from any particular set of components is, in principle, infinite. We can only check our envisioning process on a finite subset of this infinite class, and the no-function-in-structure principle provides some confidence that it will also succeed on the untested devices. As such, the principle improves the descriptive adequacy of our causal theory.

Take as a simple example a light switch. The model of a switch that states, "if the switch is off no current flows, and if the switch is on current flows," violates the no-function-in-structure principle. Although this model correctly describes the behavior of the switches in our offices, it is false in general, as there are many switches for which current does not necessarily flow when they are closed (for example, two switches in series). Current flows in the switch only if it is closed and there is a potential for current flow.
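The point can be put in a few lines of code. The following sketch (ours, not the paper's model library; Python, with hypothetical names) contrasts a context-free switch model, which separates the switch's own state from what the rest of the circuit supplies, with the flawed model above:

    def switch_current(closed, potential_for_flow):
        """Context-free model: current flows only if the switch is closed
        AND the surrounding circuit provides a potential for flow."""
        return closed and potential_for_flow

    # Two switches in series: the second is open, so no current flows
    # through the first even though it is closed.
    print(switch_current(closed=True, potential_for_flow=False))   # False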
DEVICE TOPOLOGY

A device consists of constituents. Some of these constituents are components that themselves can be viewed as smaller devices (e.g., resistors, valves, boilers). Other constituents are connections (e.g., pipes, wires, cables) through which the components communicate by transmitting information. These connections can be thought of as conduits through which "stuff" flows, its flow described by conduit laws.

Part of the evidence is represented by the device topology (see Figure 2), in which nodes represent important components of the device and edges represent connections between them. Another part of the evidence is the general model library. Each type of component and connection has a specific model which describes its behavior in the abstract, independent of any context. A component model describes all potential behaviors of the component in terms of qualitative equations on variables. For example, some important variables of a moving object are its position, velocity and acceleration. By modeling each component, the abstract qualitative behavior of the overall device is formalized as a set of qualitative equations. This set of equations is then "solved," and the solution interpreted in terms of the structure of the device. This solution process must be of a special kind so that causal explanations for its conclusions can be extracted from it.

[Figure 2: Device Topology of the Pressure Regulator (nodes include the valve with its inlet and outlet, the pressure sense, and the setting)]

Different types of conduits communicate different types of information. For example, the model for the pipe between the boiler and the turbine of a steam plant communicates pressure and steam, whereas the model of a wire between a flashlight's battery and its light bulb communicates voltage and current. Most conduit types can be modeled by two attributes, one pressure-like and the other flow-like. For a fluid system, the two attributes are volumetric flow and pressure; for thermal systems, heat flow rate and temperature; for translational systems, force and velocity; for rotational systems, torque and angular velocity; for electrical systems, current and voltage.

Envisioning is centrally concerned with qualitative incremental disturbances from equilibrium. This motivates the class of values (increments of velocities, voltages, flows, etc.) to be positive, zero and negative, in order to represent the direction of the change, if any, from equilibrium. The value of every attribute must be encoded as one of "+", "0", "-" or "?"; no other choices are possible. Arithmetic with these values is straightforward, e.g., if x = "+" and y = "+" then x + y = "+", because if x > 0 and y > 0 then x + y > 0.
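This qualitative arithmetic is small enough to write down in full; the following is a sketch (ours, not ENVISION's code; Python, with the function name qadd an illustrative choice):

    def qadd(x, y):
        """Qualitative sum over {"+", "0", "-", "?"}: the sign of a+b
        given only the signs of a and b."""
        if x == "0":
            return y
        if y == "0" or x == y:
            return x
        return "?"      # opposite or unknown signs: the result is ambiguous

    print(qadd("+", "+"))   # "+"
    print(qadd("+", "-"))   # "?"  -- competing tendencies, underdetermined

The "?" case is exactly where the analysis becomes underdetermined and, as discussed later, where envisioning must introduce explicit assumptions.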
A component model characterizes all the potential behaviors that the component can manifest. It does not, however, specify which conduits connected to it are causal, that is, inputs, and which are outputs; that can only be determined in the broader context of how a particular component is used in the overall device. The qualitative behavior of a valve (a component of the pressure regulator) is expressed by the qualitative equation (called a confluence):

A dP + P dA - dQ = 0,

where Q is the flow through the valve, P is the pressure across the valve, A is the area available for flow, and dQ, dA and dP represent disturbances from the equilibrium values of Q, A and P. In the situation where the pressure across the valve is positive (area is always positive), the expression simplifies to (using the qualitative calculus sketched out earlier):

dP + dA - dQ = 0.

The confluence represents multiple competing tendencies, each encoding a different potential causal relationship. One such relationship is: if the area increases, but the flow remains constant, the pressure decreases.

A single confluence often cannot characterize the behavior of a component over its entire operating range. Thus this range must be divided into subregions, each characterized by a different component state in which different confluences apply. For example, the behavior of the valve when it is completely open is quite different from when it is completely closed. The behavior of each state is provided by three types of rules. First, the model specifies the region of operation covered by the component state. For example, the closed state of the valve is indicated by the condition [A = 0], i.e., that there is no area available for flow. From these rules envisioning can determine what transitions between states are plausible. Second, the model provides confluences among a component's variables. These rules are used to determine what state a component might be in and to test whether a transition can occur. Finally, the model includes confluences among the changes in component variables. These confluences describe the incremental behavior of the component and are used in constructing causal explanations for device behavior. The full model for the valve is:

OPEN:    [A = A_MAX], P = 0, dP = 0
WORKING: [0 < A < A_MAX], P - Q = 0, dP + dA - dQ = 0
CLOSED:  [A = 0], Q = 0, dQ = 0

From the state specifications it is straightforward to identify the possible state transitions:

OPEN:    dA = "-" => WORKING
WORKING: dA = "-" => CLOSED, dA = "+" => OPEN
CLOSED:  dA = "+" => WORKING
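Read as data, these state specifications form a small state machine. A sketch of that reading (ours, in Python; the table encoding is an illustrative assumption) is:

    # Valve states: (condition on area A, transitions keyed on dA).
    VALVE = {
        "OPEN":    ("A = A_MAX",     {"-": "WORKING"}),
        "WORKING": ("0 < A < A_MAX", {"-": "CLOSED", "+": "OPEN"}),
        "CLOSED":  ("A = 0",         {"+": "WORKING"}),
    }

    def next_state(state, dA):
        """Successor state when the area disturbance is dA (else stay put)."""
        _, transitions = VALVE[state]
        return transitions.get(dA, state)

    print(next_state("OPEN", "-"))      # WORKING: the valve starts to pinch off
    print(next_state("WORKING", "-"))   # CLOSED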
THE ENVISIONING PROCESS

The envisioning process performs three kinds of analysis. It must determine (1) which state(s) the overall device is in, (2) the causal behavior of the device in each of those states, and (3) the possible transitions between the device states. Each type has a different form of explanation associated with it. However, in the remainder of this paper we will concern ourselves only with the causal behavior and its explanation (see [3] for more detailed discussions).

By modeling the behavior of each of the device's constituents, the potential behavior of the system is expressed as a set of confluences among changes in variables (e.g., dP, dQ). The type of the system (thermal, electrical, etc.) and the types of the variables (e.g., velocity, current, etc.) become irrelevant. Envisioning analyzes these confluences to construct a causal explanation for the behavior of the system.

There are numerous techniques for finding solutions (assignments of values to variables) to the confluences, of which relaxation, i.e., constraint satisfaction, is one. Although these techniques can predict correctly and satisfy no-function-in-structure, most are incapable of yielding any kind of reasonable explanation. For example, the best explanation a constraint satisfaction technique can give for a solution is that the solution is an assignment of values consistent with the confluences of the component models.

The explanations produced by envisioning are based on a very simplistic notion of causality which we call naive mechanism. A causal explanation consists of a series of effects on components, each of which is caused by previous effects on its neighboring components: E1 (the initial disturbance) causes E2 causes ... En. An effect always occurs as a consequence of, and therefore after, a cause. The consequences of an effect cannot immediately affect its causes. Causality concerns change: it does not explain why the components are behaving the way they are, but rather how changes in these behaviors happen (i.e., how disturbances from equilibrium propagate). That is, we do not seek a causal explanation of how the system reached a given quiescent state or why it stays in that state, but rather a causal explanation of how the system responds to disturbances from a quiescent state. The difficult task for envisioning is constructing explanations within this limiting framework; constructing predictions alone is relatively easy.

ASSUMPTIONS AND PREDICTIVENESS

Because the information available to envisioning is qualitative, the actual behavior of the overall device may be underdetermined, i.e., more than one coherent behavior is possible. Thus the concept of a correct behavioral prediction, which is central to our theory, needs to be spelled out. In order to analyze underdetermined situations, envisioning introduces explicit assumptions which are subsequently reasoned upon. Thus, in underdetermined situations envisioning produces multiple interpretations, each with different assumptions and corresponding to a different overall behavior. At a minimum, for a prediction to be correct, one of the interpretations must correspond to the actual behavior of the real system. A stronger criterion follows from observing that a structural description characterizes a wide class of different devices. The prediction produced by an envisioning is correct if (1) the behavior of each device in the class is described by one of the interpretations and (2) every interpretation describes the behavior of some device of the class.

This underdeterminacy has three immediate consequences. First, envisioning must be able to deal with underdetermined situations, a topic that in itself is difficult. Second, other external knowledge, perhaps of the teleology or known functioning of the actual device, is required to identify the correct interpretation. Third, the notion of naive mechanism must be extended to include heuristic assumption steps in causal explanations.

CONCLUSION

Qualitative reasoning is a difficult task, and ENVISION is a substantial program capable of producing interesting analyses for device behaviors that surprise even its implementors. In and of itself this last statement says nothing unless it also includes some assertions about the input evidence it operates on. A reasoning system should not be evaluated on the nature of its conclusions, but rather on the complexity of the relationship it establishes between its input and output. The input evidence to envisioning is structure: a device topology and the general library of component models. The output is function: device behavior and a causal explanation of that behavior. The quality of the output is established by the predictiveness of the behavior and the acceptability of the causal explanation. The no-function-in-structure principle merely ensures that structure and function are truly kept distinct.

ACKNOWLEDGMENTS

BIBLIOGRAPHY

[1] de Kleer, J. and J. S. Brown, "Assumptions and Ambiguities in Mechanistic Mental Models," to appear in Mental Models, edited by D. Gentner and A. S. Stevens, Erlbaum, 1982.

[3] de Kleer, J. and J. S. Brown, "The Theory and Mechanics of Envisioning," Cognitive and Instructional Sciences, Xerox PARC, 1982.
Diagnosis Using Hierarchical Design Models*

Michael R. Genesereth
Stanford University
Stanford, California 94305

Abstract: This paper presents a new algorithm for the diagnosis of computer hardware faults. The algorithm uses a general inference procedure to compute suspect components and generate discriminatory tests from information about the design of the device being diagnosed. In the current implementation this procedure is linear-input resolution, guided by explicit meta-level control rules. The algorithm exploits the hierarchy inherent in most computer system designs to diagnose systems a level at a time. In this way the number of parts under consideration at any one time is kept small, and the cost of test generation remains manageable.

1. Introduction

Being physical devices, computer components are subject to failure; and, given current design practices, the failure of one component can lead to the malfunction of an entire computer system. This paper describes an automated diagnostician for computer hardware faults. The program accepts a statement of a system malfunction in a formal language, suggests tests and accepts the results, and ultimately pinpoints the components responsible for the failure. In recognition of its intended role as an assistant for human field engineers, the program is called DART, for Diagnostic Assistance Reference Tool.

The DART program was developed in the context of moderately successful work on medical diagnosis, as exemplified by such programs as CASNET [Weiss, Kulikowski], INTERNIST [Pople 1975] [Pople 1977], and MYCIN [Shortliffe]. However, there's a difference. The medical diagnosis programs all utilize "rules" that associate symptoms with possible diseases. The DART program contains no information about how computers fail. Instead, it works directly from information about intended structure (a machine's parts and their interconnections) and expected behavior (equations, rules, or procedures that relate inputs and outputs). An important advantage of this approach is that it greatly simplifies the task of building diagnosticians for new devices. If a designer uses a modern computer-aided design system, then when he is done his design will be online. The structural and behavioral information in this model can then be passed as data to the DART program to diagnose its faults. Similarly, the design information could be passed to a program to generate manufacturing instructions and to another program to generate testing codes.

The idea of using design information in automated diagnosis is hardly a new one. Over the years a number of test generation algorithms have been proposed, the most well-known of which is the d-algorithm [Roth et al]. Its primary disadvantage is its runtime. While the cost of executing a test is usually small and the number of tests needed to pinpoint a fault is at worst linear in the number of components [Goel], the cost of generating appropriate tests grows polynomially or exponentially. (In fact, the problem is NP-complete [Sahni and Ibarra].) Since the d-algorithm works only at the gate level, it is impractical for circuits the size of current computer systems.

The DART program meets this difficulty by exploiting the hierarchy inherent in most computer system designs. (See Figure 1.)

--------
* This work was done with the support and collaboration of the Palo Alto Scientific Center and the Field Engineering Division of IBM.
The program first diagnoses the system at a high level of abstraction to determine the major subcomponent in which the fault lies (e.g. the adder A1). It then focusses its attention on the next lower level (e.g. finding the full-adder F2) and then repeats until it can identify a replaceable part (e.g. the xor gate X3). In this way, the number of components under consideration at any one time is kept small, and the cost of test generation remains manageable.

Within each level DART uses a deductive procedure to compute suspects and generate tests. All symptoms are expressed as violations of expected behavior. Starting with a symptom of this sort, DART reasons backwards from the expected behavior to discover why it was expected, and in so doing produces a justification for its conclusions. Since the expected behavior was not observed, all of the parts mentioned in this justification are suspect. The next step is to generate a test to discriminate among these suspects. DART starts with a behavioral rule for one of the suspects and works forward to observable outputs and backwards to modifiable inputs. The result of this step is an expectation for certain outputs when certain inputs are applied. If the outputs do not have the values expected, then one of the parts mentioned in the derivation of the test must be broken.

Section 2 presents a simplified version of the design description language used by DART, and section 3 describes the diagnostic procedure itself. Note that, although the examples in this paper are all time-independent, the language is equally versatile at describing time-dependent behavior, and the DART algorithm generalizes as well. Section 4 offers an analysis of the completeness and efficiency of the algorithm and pinpoints some of its shortcomings. The conclusion discusses the state of implementation and testing and summarizes the key points of the paper. This paper is an abridged version of an earlier paper [Genesereth].

[Figure 1: A Hierarchical Design. The device D74 has inputs A, B, and C; multipliers M1, M2, and M3 produce lines D, E, and F; adders A1 and A2 produce outputs G and H; the adder A1 decomposes into full-adders F1 through F4.]

2. Design Models

The DART procedure expects as data a full description of the structure and behavior of the circuit to be diagnosed. This information must be in the form of data base assertions in a design description language called SUBTLE. This language has the advantage of being equally expressive at all levels of description; and, importantly, it is adequate for encoding tests and assumptions about the circuit, e.g. the single fault assumption or nonintermittency.

The syntax of SUBTLE is the same as that of predicate calculus, and the rules should be apparent after a few examples. The vocabulary is described in the following sections. Three syntactic conventions are used in this paper to simplify the examples. Upper case letters are used exclusively for constants, functions, and relations, while lower case letters are used for variables. Prefix universal quantifiers are dropped, and all free variables can be assumed to be universally quantified. Finally, for the sake of brevity, a prefixed ~ is used to denote negation.

2.1 Structural Vocabulary

The structure of a device is specified by describing its parts and their interconnections. The structure of each part can in turn be described until one reaches one's "primitive" components (which are usually characterized behaviorally).
In SUBTLE each part is designated by an atomic name (e.g. A1). The type of each part is declared using type relations. The following assertions declare the types of the top-level components of the circuit in figure 1: M1, M2, and M3 are multipliers; A1 and A2 are adders.

MULT(M1)  MULT(M2)  MULT(M3)  ADDER(A1)  ADDER(A2)

Every device in SUBTLE has zero or more inputs and outputs, and these "ports" are designated using the functions INi and OUTi. For example, IN2(M1) would designate the second input of M1, and OUT2(FAN) would designate the second output of FAN. If a device has only one input or output, the trailing digit is omitted.

Connections are made between the ports of devices. The next 4 assertions specify the wiring diagram for D74. For example, the first assertion states that the output of M1 is connected to the first input of A1.

CONN(OUT(M1), IN1(A1))
CONN(OUT(M2), IN2(A1))
CONN(OUT(M2), IN1(A2))
CONN(OUT(M3), IN2(A2))

2.2 Behavioral Vocabulary

The behavior of a circuit can usually be expressed in terms of the signal values at the circuit's inputs and outputs. In SUBTLE, the signal at an input or output at a given time is specified by superscripting the input or output function and equating it to its value. For example, the first assertion below states that at time 1, the second input of A1 is 3. The second assertion states that the output of M1 is always 1.

IN2^1(A1) = 3
OUT^t(M1) = 1

The simplest form of behavioral specification is a set of rules relating a circuit's inputs and outputs. For example, the behavior of an adder can be captured by the following rules. OK(r) is intended to mean that r is operational, i.e. not broken.

ADDER(r) & OK(r) & IN1^t(r)=x & IN2^t(r)=y -> OUT^t(r)=x+y
ADDER(r) & OK(r) & IN1^t(r)=x & OUT^t(r)=z -> IN2^t(r)=z-x
ADDER(r) & OK(r) & OUT^t(r)=z & IN2^t(r)=y -> IN1^t(r)=z-y

As with structural information, behavioral descriptions are frequently hierarchical, with signals being characterized differently at one level of the structural hierarchy than at another. In order for DART to diagnose a circuit, it must have a formal statement of the relationship between the signals at these different levels. A good way of encoding this information is in the form of propositions relating the inputs and outputs of a device at one level with their counterparts at the next lower level. For example, the following propositions capture the mapping between the integers manipulated by an adder and the bits manipulated by its subparts.

IN1^t(A1) = IN1^t(F4)*8 + IN1^t(F3)*4 + IN1^t(F2)*2 + IN1^t(F1)*1
IN2^t(A1) = IN2^t(F4)*8 + IN2^t(F3)*4 + IN2^t(F2)*2 + IN2^t(F1)*1
OUT^t(A1) = OUT1^t(F4)*8 + OUT1^t(F3)*4 + OUT2^t(F2)*2 + OUT2^t(F1)*1

Finally, there is a set of behavioral rules for connections. The following proposition states that, if two terminals are connected, they always bear the same signal.

CONN(OUT(r), IN(s)) -> OUT^t(r) = IN^t(s)
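To show how little machinery the structural vocabulary presupposes, here is a sketch (ours, not SUBTLE itself; Python, with the terminal encoding an illustrative assumption) that stores the four CONN assertions for D74 and applies the connection rule that connected terminals bear the same signal:

    # CONN(OUT(M1), IN1(A1)) is stored as (("M1","OUT"), ("A1","IN1")), etc.
    CONN = [(("M1", "OUT"), ("A1", "IN1")),
            (("M2", "OUT"), ("A1", "IN2")),
            (("M2", "OUT"), ("A2", "IN1")),
            (("M3", "OUT"), ("A2", "IN2"))]

    def propagate(signals):
        """Connection rule: copy each known source signal to its sink."""
        changed = True
        while changed:
            changed = False
            for src, dst in CONN:
                if src in signals and signals.get(dst) != signals[src]:
                    signals[dst] = signals[src]
                    changed = True
        return signals

    print(propagate({("M2", "OUT"): 3}))
    # {('M2', 'OUT'): 3, ('A1', 'IN2'): 3, ('A2', 'IN1'): 3}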
2.3 Fault Assumptions

Two assumptions that are important to the efficiency and power of test generation procedures are the single fault assumption and nonintermittency. Neither is essential for the operation of the DART algorithm. However, by adding formal statements of these assumptions to DART's global data base, one can get the algorithm to take advantage of them.

The single fault assumption states that at most one component in a circuit is faulty. This idea can be formalized by stating that the non-functionality of any component implies the functionality of all the rest. For the top-level parts of the device in figure 1, this would lead to the following propositions.

¬OK(M1) → OK(M2) ∧ OK(M3) ∧ OK(A1) ∧ OK(A2)
¬OK(M2) → OK(M1) ∧ OK(M3) ∧ OK(A1) ∧ OK(A2)
¬OK(M3) → OK(M1) ∧ OK(M2) ∧ OK(A1) ∧ OK(A2)
¬OK(A1) → OK(M1) ∧ OK(M2) ∧ OK(M3) ∧ OK(A2)
¬OK(A2) → OK(M1) ∧ OK(M2) ∧ OK(M3) ∧ OK(A1)

The nonintermittency assumption states that all devices behave consistently over time. This is patently false in general, for it implies that no part can ever fail. However, it is often a reasonable assumption to make for the duration of a diagnosis, and it is essential in the derivation of some powerful tests. The proposition below encodes the assumption formally. It states that, if a device with specific inputs has a specific output at one time, then given the same inputs it will have the same output at any other time.

IN1^s(r)=x ∧ IN2^s(r)=y ∧ OUT^s(r)=z ∧ IN1^t(r)=x ∧ IN2^t(r)=y → OUT^t(r)=z

3. The DART Algorithm

The DART procedure begins with the design description for the device under test and a set of observed symptoms. It produces as output the minimal set of replaceable parts that will correct the error. Especially useful to the procedure are the assumptions that the fault occurs within a single replaceable part and is not intermittent. While neither assumption is logically necessary, the performance of the algorithm degrades without them.

Figure 2 presents an overall flowchart. The first step is to diagnose the fault at the highest level in the structural hierarchy. This may result in more than one part if the problem is not diagnosable at that level. If the implicated parts are replaceable, the diagnosis is complete. Otherwise, DART examines them at the next lower level of detail.

[Figure 2 - Overall View of DART Algorithm: diagnose at the current level (see figure 3); descend one level if the implicated parts are not replaceable.]

The algorithm treats each level of the hierarchy in the same way. Figure 3 presents a flowchart for the diagnostic component. The algorithm first uses information about the symptoms to compute a list of suspect parts. If this list contains only a single element, the diagnosis is complete. Otherwise, the next step is to devise a test to discriminate the suspects. The test is then executed, and the process repeats. If no further discriminatory tests can be found, the problem is undiagnosable and a list of the remaining suspects is returned as value.

[Figure 3 - DART Algorithm: compute suspects; if more than one remains, generate a test, execute it, and repeat.]

The workhorse of the DART algorithm is a general inference procedure. In the current implementation this procedure is linear-input resolution, guided by a set of explicit meta-level control rules.
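Read as code, the two flowcharts amount to a nested loop. The Python sketch below is only an illustration of that control structure; the helper functions compute_suspects, generate_test, and run_test, and the attributes of the design object, are hypothetical stand-ins for the deductive machinery described in the next two subsections:

# Illustrative sketch of the overall DART control structure.
def dart(design, symptoms, compute_suspects, generate_test, run_test):
    suspects = compute_suspects(design.top_level, symptoms)
    while True:
        # Diagnose at the current level (figure 3): narrow the suspect
        # list until one suspect remains or no further test exists.
        while len(suspects) > 1:
            test = generate_test(suspects)
            if test is None:
                break                 # undiagnosable: return survivors
            suspects = run_test(test, suspects)
        # If every implicated part is replaceable, we are done;
        # otherwise descend one level in the hierarchy (figure 2).
        if all(design.is_replaceable(p) for p in suspects):
            return suspects
        suspects = [s for p in suspects
                      for s in (design.subparts(p) or [p])]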
3.1 Computing Suspects at Each Level

The goal of suspect computation is a proposition of the following form, where each Pi is a statement about the structure of the circuit, e.g. OK(A1) or CONN(OUT(M1), IN1(A1)).

¬P1 ∨ ... ∨ ¬Pn

In generating suspects DART starts with a symptom, i.e. the negation of an expectation, and resolves it with the rules in the device's design model until it produces a proposition of this form. As an example, consider the circuit shown in figure 1. Assume that for inputs A=1, B=1, and C=3 the device produces 2 as the value of output G instead of the correct result 4. Since G≠4 and ADDER(A1), DART can conclude (D≠1 ∨ E≠3 ∨ ¬OK(A1)). The only way that D≠1 could be true is if (A≠1 ∨ B≠1 ∨ ¬OK(M1)), and the only way that E≠3 could be true is if (B≠1 ∨ C≠3 ∨ ¬OK(M2)). Since the inputs A, B, and C are known, the system can conclude the following suspect proposition.

¬OK(M1) ∨ ¬OK(M2) ∨ ¬OK(A1)

One thing to note about this technique is that it is equivalent to proving the expected value and then looking at the dependencies for all propositions of the form OK(p) or CONN(x,y). Another thing to note is that suspect computation is not simply a matter of tracing the circuit diagram backwards from a faulty output. On the one hand, this can lead to too many suspects. For example, if the output of a multiplier is expected to be zero because its second input is zero, then in computing suspects for a nonzero output, it isn't necessary to look at the predecessors of the first input. On the other hand, simply tracing backwards can also lead to too few suspects. For example, it would rule out the possibility that components elsewhere in the circuit could have any bearing on the symptom and so would make it impossible to diagnose short circuits.

3.2 Generating Tests

The goal of DART's test generation procedure is a proposition of the following form, where each Ii is a statement about a modifiable input, each of the pi is an immediate subcomponent of the device being diagnosed, and O is a statement about an observable output. In order to have discriminatory power, the pi should not include all of the current suspects.

OK(p1) ∧ ... ∧ OK(pn) ∧ I1 ∧ ... ∧ Im ⟹ O

In generating tests, DART starts with a behavioral rule for one of the suspects and generates conclusions using the rules from the device's design model until a proposition of the appropriate form is generated. As an example, consider the generation of a test to discriminate the suspects in the example introduced in the last section. Starting with a rule for the multiplier M2, DART is able to produce a test with the same inputs.

OK(M2) ∧ OK(M3) ∧ OK(A2) ⟹ H=6

If H≠6, then the fault must lie in one of M2, M3, or A2. M3 and A2 are not mentioned in the previous suspect list; and so, under a single fault assumption, they are exonerated; and the fault must lie in M2. If H=6, M2 can be exonerated by a little deduction. If M2 is responsible for the erroneous value on output G, it must be because its output is wrong. Therefore, if M3 and A2 are okay, H must also be wrong. Since H=6 as expected, M2 must be okay.

As another example, consider the problem of discriminating the remaining two suspects M1 and A1. DART reasons as follows. If the fault lies in M1, then A1 is ok; and, for G to be 2, D must be -1. Therefore, if E is changed to 4, G must change to 3. If it doesn't, then A1 is broken, and M1 can be exonerated. (Note that, if a 3 appears as predicted, this does not exonerate A1.) Next the program figures out that, in order to get a 4 on line E without disturbing the inputs to M1, input C must be changed to 4. As a proposition this test can be represented as follows.

OK(A1) ∧ A=1 ∧ B=1 ∧ C=4 ⟹ G=3

In this case suppose that G changes to 5. This is correct but not the predicted result. In light of the previous symptom, it demonstrates that A1 is faulty.
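Both the suspect computation of section 3.1 and the tests just derived rest on the same expected-value calculation. The Python sketch below is our own illustration of the dependency-recording view of suspect computation mentioned above (the representation is ours; DART derives the same set by resolution):

# A sketch of dependency-based suspect computation for the example.
# Each expectation records the OK(...) assumptions supporting it.
def expect(value, support):
    return {"value": value, "support": frozenset(support)}

A, B, C = 1, 1, 3
D = expect(A * B, {"OK(M1)"})                       # M1: D = A*B
E = expect(B * C, {"OK(M2)"})                       # M2: E = B*C
G = expect(D["value"] + E["value"],                 # A1: G = D+E
           {"OK(A1)"} | D["support"] | E["support"])

observed_G = 2
if observed_G != G["value"]:
    # The symptom implicates every part mentioned in the justification
    # of the violated expectation: here OK(M1), OK(M2), and OK(A1).
    suspects = G["support"]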
3.3 Control

Each of the steps in the DART algorithm uses the resolution rule to derive new conclusions until appropriate termination criteria are met. For example, in generating tests DART begins with a behavioral rule and draws conclusions using the facts from the circuit's design model. From these conclusions, it draws other conclusions and so on until a "test" is derived. If the resolution rule were applied without guidance, it would produce numerous useless propositions. To mitigate this problem, DART draws only those conclusions in which at least one of the resolvents is an assertion in the design model. No resolutions are performed between conclusions and other conclusions. This restriction is often called the "linear input strategy" [Nilsson] and is very effective in preventing an undesirable proliferation of useless conclusions.

DART orders its deductions by the size of the conclusion relative to the sizes of the resolvents, with the smallest being processed first. A special case of this occurs when one of the resolvents is an atomic proposition, in which case the conclusion has one fewer disjuncts than the other resolvent. This is often called the "unit preference strategy" [Nilsson]. DART also postpones the instantiation of variables as long as possible. This is equivalent to the constraint propagation technique used in the revised d-algorithm and described more recently in the AI literature [Sussman, Steele] [Stefik].

4. Analysis of the Algorithm

The two key factors in analyzing a diagnostic procedure are its diagnostic completeness and computational efficiency.

4.1 Completeness

The DART algorithm is almost complete in that it can generate most tests of this form; and a slight variant (ancestry-filtered resolution), though computationally more expensive, is guaranteed to generate every test. Even so, due to the limited vocabulary at a given level of description, the set of tests so defined is sometimes inadequate to localize every fault.

4.2 Efficiency

The problem of test generation has been shown to be NP-complete [Ibarra and Sahni], and so in the worst case the runtime of a complete test generation procedure is likely to be exponential in the number of subparts involved. In the worst case, the DART algorithm is no better than any other procedure. However, for many circuits the cost of diagnosis as a function of the number of components is approximately the same at each level of description; and, when this is the case, hierarchical diagnosis leads to substantial computational savings. Assuming that at each level the cost of diagnosis is some function Fi and the number of gates is Ni, the overall cost of hierarchical diagnosis can be expressed as follows, where H is the number of levels.

Σ(i=1..H) Fi(Ni)

The computational advantage of hierarchical diagnosis is most apparent when the number of gates and the cost function are constant from level to level. Then, for a device of H levels, with N gates at each level, the cost of non-hierarchical diagnosis is F(N^H), whereas for hierarchical diagnosis the cost is only H*F(N). This saving is of special importance when the cost function F is non-linear, but even in the best case of a linear cost, the hierarchical approach still offers a logarithmic advantage.
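To see the magnitude of this advantage, a small back-of-the-envelope computation helps. The Python sketch below simply evaluates the two cost expressions for an illustrative quadratic cost function; the particular numbers are ours, purely for illustration:

# Compare flat vs. hierarchical diagnosis cost for H levels of N gates,
# using an illustrative quadratic per-level cost F(n) = n**2.
def F(n):
    return n ** 2

H, N = 3, 10                 # e.g. 3 levels, 10 components per level
flat = F(N ** H)             # all 10**3 = 1000 gates at once: 1,000,000
hierarchical = H * F(N)      # 3 levels of 10 components each:      300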
5. Conclusions

The DART algorithm has been implemented and tested on a variety of examples. The implementation was done in COMMON LISP [Steele and Fahlman] with the help of a data base and inference system called MRS [Genesereth, Greiner, Smith]. The examples include simple circuits like the one in the introduction as well as complex systems like the teleprocessing facility of the IBM 4331. In all cases the algorithm was able to generate appropriate tests and diagnose the underlying fault. Running on a VAX-11/780, the time to diagnose each case was on the order of minutes. While this is not particularly good for small circuits, it could be improved significantly by more efficient programming style. And, importantly, the program is able to diagnose large circuits without significant slowdown.

In summary, the key contributions of this research are (1) its demonstration of the value of hierarchical design models in diagnostic test generation and (2) the design of the DART algorithm. DART utilizes the hierarchy inherent in most computer systems to keep the number of parts under consideration small and thereby reaps substantial computational savings. Furthermore, since the method works directly with design models, rules associating symptoms with specific failures are unnecessary. For these reasons, the algorithm appears to be a promising way of coping with the increasing complexity of modern computer system designs.

References

J. S. Bennett and C. R. Hollander: "DART", Proceedings of the 7th International Joint Conference on Artificial Intelligence, August 1981.

M. A. Breuer, M. Abramovici: "Fault Diagnosis Based on Effect-Cause Analysis", 17th Design Automation Conference Proceedings, June 1980.

M. R. Genesereth, R. Greiner, D. E. Smith: "MRS Manual", HPP-81-6, Stanford University Heuristic Programming Project, December 1981.

M. R. Genesereth, M. Grinberg, J. Lark: HPP-81-11, Stanford University Heuristic Programming Project, September 1981.

M. R. Genesereth: "The Use of Hierarchical Design Models in the Automated Diagnosis of Computer Hardware Faults", HPP-81-20, Stanford University Heuristic Programming Project, December 1981.

P. Goel: "Test Generation Cost Analysis and Projections", 17th Design Automation Conference Proceedings, June 1980.

O. H. Ibarra and S. Sahni: "Polynomially Complete Fault Detection Problems", IEEE Transactions on Computers, Vol C-24 No 3, March 1976, pp 242-250.

N. Nilsson: Principles of Artificial Intelligence, Tioga Press, 1980.

H. Pople: "The Dialog Model of Diagnostic Logic and Its Use in Internal Medicine", Proceedings of the Fourth International Joint Conference on Artificial Intelligence, 1975.

H. Pople: "The Formation of Composite Hypotheses in Diagnostic Problem Solving - An Exercise in Synthetic Reasoning", Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp 1030-1037.

J. A. Robinson: "A Machine-Oriented Logic Based on the Resolution Principle", Journal of the Association for Computing Machinery, Vol. 12 No. 1, 1965, pp 23-41.

J. P. Roth, W. G. Bouricius, P. R. Schneider: "Programmed Algorithm to Compute Tests to Detect and Distinguish Faults in Logic Circuits", IEEE Transactions on Electronic Computers, Vol EC-16 No 5, October 1967.

E. Shortliffe: MYCIN: Computer-Based Medical Consultation, American Elsevier, 1976.

G. J. Sussman, G. L. Steele: "Constraints - A Language for Expressing Almost-Hierarchical Descriptions", Artificial Intelligence Vol 14, 1980, pp 1-39.

G. L. Steele, S. E. Fahlman: "COMMON LISP Manual", SPICE Project, Carnegie-Mellon University, September 1981.

M. Stefik: "Planning with Constraints", Artificial Intelligence Vol 16, 1981, pp 111-140.

S. Weiss, C. Kulikowski, A. Safir: "A Model-Based Consultation System for the Long-Term Management of Glaucoma", Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp 826-832.
THE CRITTER SYSTEM: ANALYZING DIGITAL CIRCUITS BY PROPAGATING BEHAVIORS AND SPECIFICATIONS

Van E. Kelly
Louis I. Steinberg
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903

ABSTRACT

CRITTER is a system that reasons about digital hardware designs, using a declarative representation that can represent components and signals at arbitrary levels of abstraction. CRITTER can derive the behaviors of a component's outputs given the behaviors of the inputs, it can derive the specifications a component's inputs must meet in order for some given specifications on the outputs to be met, and it can verify that a given signal behavior satisfies a given specification. By combining these operations, it evaluates both the correctness and the robustness of the overall design.*

I INTRODUCTION: REASONING ABOUT DIGITAL CIRCUITS

In understanding or explaining a digital circuit, a human engineer's reasoning is flexible in several ways:

1. Behavior vs. Specification: The engineer can reason about what will happen in the circuit, given some input (its behavior), or about what ought to happen in order for the circuit to perform as desired (its specifications). In a correctly working circuit, the behavior must satisfy the specifications, but the two need not be identical; especially in matters of timing, "over-satisfaction" of some specifications is good practice.

2. Forward vs. Backward: The engineer can either use some fact about the inputs to a component to infer something about the output (forward reasoning), or vice-versa (backward reasoning).

3. Level of Abstraction: In reasoning, the engineer can treat each physical component separately or can view several components as a single functional module. Also, the engineer can view electrical signals at different levels of abstraction, e.g. as sequences of high/low voltages or as ASCII characters.

*This material is based on work supported by the Defense Advanced Research Projects Agency under Research Contract N00014-81-K-0384. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

4. Degree of Specificity: The engineer can reason about the value of a signal either at a specific time or over all time (e.g., "The clock can never rise while this signal is high."). Also, the engineer can deal with specific data and time values, e.g. "10 nsec. after the clock rises", or general predicates, e.g. "between 5 and 10 nsec. after ...".

We have developed a system, called CRITTER, that displays some of this same flexibility. CRITTER does forward reasoning about behaviors and backward reasoning about specifications, propagating information step by step through a circuit in a manner reminiscent of constraint propagation systems [1, 2, 3, 4, 5, 6] and Waldinger's Goal Regression [7]. CRITTER handles varying levels of abstraction, statements about specific values or more general predicates, and statements quantified over all time.

CRITTER is a system for "critiquing" digital circuits. That is, it is designed to deductively solve the following problem: Given the behavior of a circuit's inputs and the specifications on its outputs, determine the behavior and specifications for each signal in the circuit and whether each signal's behavior meets all its specifications.
We also believe CRITTER's reasoning methods will prove valuable in other design tasks besides critiquing. In particular, our work on CRITTER has been part of a project on redesign aids [8]. We also envision applications in creating, debugging and documenting designs and in trouble-shooting physical hardware.

II THE ANATOMY OF CRITTER

In this section we will describe how CRITTER represents and reasons about circuits. First, however, we will introduce the circuit we will be using as a running example.

A. The Example Circuit

The circuit is shown in Fig. II-1. For purposes of exposition, it is a bit simpler than the real circuits on which we have used CRITTER. (See Sec. III.) This circuit is designed to convert ASCII alphanumeric characters to their EBCDIC equivalents. Every 576 nanoseconds a character appears at the input ASCII-IN, and is held there for 300 nsec. After each character appears, the clock signal CLK will rise. The actual translation to EBCDIC is done by ROM (read-only-memory) R, but this particular ROM requires its input to be available for at least 500 nsec. The input characters are only available for 300 nsec., so we also have latch L, which samples a character each time CLK rises and keeps it stable for the ROM until the next rise of CLK.

[Figure II-1: THE EXAMPLE CIRCUIT. ASCII-IN and CLK feed Latch L; its output LATCHED-ASCII feeds ROM R, which produces EBCDIC-OUT.]

B. Representation of Circuits

The physical structure of a circuit is represented by modules and data-paths. A module represents either a single component or a cluster of components being viewed as a single functional block. In Fig. II-1, both the latch and the ROM are modules, and we could regard their composite as a module as well if we chose to do so. Similarly, a data-path represents either a wire or a group of wires.

1. Signals

The data flowing on a data-path is viewed as a data-stream, which represents the entire history of the data on the path as an infinite sequence of data elements, i.e. as an array with subscripts running from 1 to infinity. Each element is characterized by a set of features, such as its TYPE, VALUE, START-TIME, and DURATION. The behavior of a data-stream is represented by a formula for each feature, giving the value of the feature for an element as a function of the element's subscript. For instance, Fig. II-2 gives the set of behavior formulas for the ASCII characters at the input to the LATCH. (The syntax used by CRITTER has been slightly modified here and in the other examples for the sake of readability.) Essentially Fig. II-2 says that for all i from 1 to infinity there exists a unique data element ASCII-IN(i), whose TYPE is ..., whose VALUE is ..., etc. Note that all times in the circuit are given relative to some arbitrary time T0.

NAME of signal = ASCII-IN
INDEX-STRUCTURE = ([i (from 1 to +INFINITY)])
TYPE(i) = ASCII-CHARACTER
VALUE(i) = NOT-SPECIFIED
START-TIME(i) = (576*i + 435) nsec. after T0
DURATION(i) = 300 nsec.

Figure II-2: GIVEN BEHAVIOR FOR ASCII-IN

A signal can also be described as an array of more than one dimension; the additional dimensions describe the substructure of the larger repeating elements. For example, Fig. II-3 gives the behavior for the other input signal, the clock pulse for the latch. It says that each element of the data-stream has subparts numbered 0 to 3, and gives formulas for the features of the subparts.

NAME of signal = CLK
INDEX-STRUCTURE = ([i (from 1 to +Infinity)] [phase# (from 0 to 3)])
TYPE(i, phase#) = VOLTAGE-LEVEL
VALUE(i, phase#) = (CASE phase# of 0: RISE 1: HIGH 2: FALL 3: LOW)
START-TIME(i, phase#) = [576*(i+1) + (CASE phase# of 0: 20 1: ... 2: ... 3: 93)] nsec. after T0
DURATION(i, phase#) = (CASE phase# of 0: 1 1: ... 2: ... 3: 503) nsec.
PERIOD = 576 nsec.

Figure II-3: GIVEN BEHAVIOR FOR CLK
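Because each feature is a closed-form function of the subscript, a behavior can be rendered executably. The following Python sketch is our own illustration (CRITTER stores these as symbolic formulas, not as functions) of the ASCII-IN behavior of Fig. II-2:

# A sketch of the ASCII-IN behavior: each feature is a function of
# the element subscript i, per Figure II-2. Times are nsec. after T0.
ASCII_IN = {
    "TYPE":       lambda i: "ASCII-CHARACTER",
    "VALUE":      lambda i: None,            # NOT-SPECIFIED
    "START-TIME": lambda i: 576 * i + 435,
    "DURATION":   lambda i: 300,
}

# e.g. the third character appears at ASCII_IN["START-TIME"](3) == 2163
# and is held for ASCII_IN["DURATION"](3) == 300 nsec.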
It is interesting to note that the two input signals are described at two very different levels of abstraction. The clock is described in terms of high and low voltages, which are even less abstracted than bits. The other input is described as a stream of characters, leaving implementation details implicit, including the fact that an ASCII character requires 7 bits to represent, and thus the LATCH module must really be 7 physical latches. The ability not only to abstract, but to mix levels of abstraction freely in a single circuit, is a powerful tool for focusing CRITTER's attention on the critical analytical issues within each circuit and suppressing insignificant detail.**

**Note, however, that CRITTER does not choose abstractions; they must be supplied from outside.

2. Modules

A module's operation is described by a set of operating conditions and a set of mappings. An operating condition is a specification (i.e. a predicate) on a module's inputs that must be met if the module is to work. For example, in Figure II-4, operating condition [4] states that the clock for a given character must occur at least 20 nsec. after the character starts, i.e. the "setup time" for this latch is 20 nsec. Condition [6] says there must not be a clock between characters.

[1] CLK must fit the definition of a STANDARD-CLOCK (i.e. two dimensions, periodic, etc.)
[2] INDEX-STRUCTURE of ASCII-IN must match '([* (from 1 to +Infinity)])'.
[3] FOR ALL i > 0, j IN {1,3}: DURATION[CLK(i,j)] >= 20 nsec.
[4] FOR ALL i > 0: the time of the NEXT (RISE of CLK) after START-TIME[ASCII-IN(i)] >= START-TIME[ASCII-IN(i)] + 20 nsec.
[5] FOR ALL i > 0: the time of the NEXT (RISE of CLK) after START-TIME[ASCII-IN(i)] < START-TIME[ASCII-IN(i)] + DURATION[ASCII-IN(i)].
[6] FOR ALL i > 0: the time of the NEXT (RISE of CLK) after (START-TIME[ASCII-IN(i)] + DURATION[ASCII-IN(i)]) > START-TIME[ASCII-IN(i+1)].

Figure II-4: OPERATING-CONDITIONS OF LATCH L

For each output of a module there is a mapping which gives a formula for each feature of the output in terms of features of the module's inputs.

NAME of output = LATCHED-ASCII
INDEX-STRUCTURE = ([i (from 1 to +Infinity)])
TYPE(i) = TYPE[ASCII-IN(i)]
VALUE(i) = VALUE[ASCII-IN(i)]
START-TIME(i) = (the time of the NEXT (RISE of CLK) after START-TIME[ASCII-IN(i)]) + 35 nsec.
DURATION(i) = PERIOD[CLK] - 35 nsec.

Figure II-5: INPUT/OUTPUT MAPPING OF LATCH L

For example, Figure II-5 says that the output LATCHED-ASCII is a data-stream with elements from one to infinity, and tells how features of these elements depend on features of CLK and ASCII-IN. Note that a mapping is only reliable if all of the module's operating conditions are met by its inputs. As another example, Fig. II-6 gives the operating conditions and mapping for ROM R.
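To make the mapping notion concrete, here is a small Python sketch of the latch mapping of Fig. II-5, written over the executable behavior style used above. This is our own illustration; next_rise_of_clk is a hypothetical helper standing in for CRITTER's NEXT operator:

# A sketch of the latch mapping (Figure II-5), applied to input
# behaviors. CLK rises at 576*(i+1) + 20 nsec. after T0 (Figure II-3).
CLK_PERIOD = 576

def next_rise_of_clk(t):
    """Time of the first CLK rise strictly after time t (nsec.)."""
    j = 0
    while 576 * (j + 1) + 20 <= t:
        j += 1
    return 576 * (j + 1) + 20

def latched_ascii(ascii_in):
    return {
        "TYPE":       ascii_in["TYPE"],
        "VALUE":      ascii_in["VALUE"],
        "START-TIME": lambda i: next_rise_of_clk(ascii_in["START-TIME"](i)) + 35,
        "DURATION":   lambda i: CLK_PERIOD - 35,
    }

# e.g. latched_ascii(ASCII_IN)["START-TIME"](1) == 576*1 + 631 == 1207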
C. Reasoning Methods

Using these representations, CRITTER can derive the behaviors of a module's outputs given the behaviors of the inputs, it can derive the specifications a module's inputs must meet in order for some given specifications on the outputs to be met, and it can verify that a given behavior satisfies a given specification. We will now discuss these kinds of reasoning.

OPERATING-CONDITIONS:
[1] INDEX-STRUCTURE of input LATCHED-ASCII must match '([i (from 1 to +INFINITY)])'
[2] FOR i > 0: TYPE[LATCHED-ASCII(i)] = ASCII-CHARACTER
[3] FOR i > 0: DURATION[LATCHED-ASCII(i)] > 500 nsec.

INPUT/OUTPUT MAPPING OF ROM:
NAME of output = EBCDIC-OUT
INDEX-STRUCTURE = ([i (from 1 to +Infinity)])
TYPE(i) = EBCDIC-CHARACTER
VALUE(i) = ASC-TO-EBCD(VALUE[LATCHED-ASCII(i)])
START-TIME(i) = START-TIME[LATCHED-ASCII(i)] + 500 nsec.
DURATION(i) = DURATION[LATCHED-ASCII(i)] - 250 nsec.

Figure II-6: DESCRIPTION OF OPERATION OF ROM R

1. Propagating Behaviors Forward

To calculate the behavior of a module's outputs, given the behavior of its inputs, one symbolically applies the module mappings to the inputs, by a process of substitution. The behavior of the output resembles the mapping, but with every reference to a feature of the input replaced by the corresponding formula from the input's behavior. For instance, to calculate the DURATION of the latched characters we substitute the behavior of the CLK from Figure II-3:

PERIOD = 576 nsec.

into the DURATION formula of Figure II-5:

DURATION(i) = PERIOD[CLK] - 35 nsec. = 576 nsec. - 35 nsec. = 541 nsec.

On the other hand, since nothing is specified in Figure II-2 about the VALUE of each incoming character, nothing can be substituted in the VALUE formula of the LATCH's I/O mapping, so the formula

VALUE(i) = VALUE[ASCII-IN(i)]

is all that we know about that feature of the output behavior. A complete calculation of the behavior of the output of the latch is given in Figure II-7. Note that this substitution operation depends on the input behavior being represented as <feature> = <formula> rather than in the more general predicate form we use for operating conditions. The mapping must also be represented as <feature> = <formula>. If not, the substitution can still be done, but it will result in a behavior which does not have the <feature> = <formula> form and cannot be further propagated.

NAME of signal = LATCHED-ASCII
INDEX-STRUCTURE = ([i (from 1 to +INFINITY)])
TYPE(i) = ASCII-CHARACTER
VALUE(i) = VALUE[ASCII-IN(i)]
START-TIME(i) = [576*i + 631] nsec. after T0
DURATION(i) = 541 nsec.

Figure II-7: CALCULATED BEHAVIOR OF OUTPUT OF LATCH

Forward propagation can generate some messy expressions. CRITTER does some algebraic transformations to simplify expressions as much as it can. For instance, in calculating the START-TIME of LATCHED-ASCII it transforms

[NEXT (576*(j+1) + 20) AFTER (576*i + 435)] + 35

into

576*i + 631

Of course, the forward propagation is not valid unless the module's operating conditions are met, but this is checked as part of checking in general whether a data-stream meets its specifications, which is discussed below.
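Symbolic substitution of this kind is easy to sketch with a toy term rewriter. The Python fragment below is ours, for illustration only (CRITTER's real formula language is much richer); it substitutes input behavior formulas into a mapping expressed as strings:

# A toy illustration of forward propagation by substitution: every
# reference to an input feature in the mapping is replaced by the
# corresponding formula from the input's behavior.
MAPPING = {"DURATION(i)": "PERIOD[CLK] - 35"}
INPUT_BEHAVIOR = {"PERIOD[CLK]": "576"}

def propagate(mapping, behavior):
    out = {}
    for feature, formula in mapping.items():
        for ref, defn in behavior.items():
            formula = formula.replace(ref, defn)
        out[feature] = formula          # here: "576 - 35", i.e. 541
    return out

# propagate(MAPPING, INPUT_BEHAVIOR) == {"DURATION(i)": "576 - 35"}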
By repeated propagation, CRITTER can produce a behavior for each data-stream in a circuit, given behaviors for the circuit's inputs. Figure II-8 gives the calculated behavior of the ROM's output, produced by propagating the behavior of LATCHED-ASCII one step further.

NAME of signal = EBCDIC-OUT
INDEX-STRUCTURE = ([i (from 1 to +INFINITY)])
TYPE(i) = EBCDIC-CHARACTER
VALUE(i) = ASC-TO-EBCD(VALUE[ASCII-IN(i)])
START-TIME(i) = (576*i + 1131) nsec. after T0
DURATION(i) = 291 nsec.

Figure II-8: CALCULATED BEHAVIOR OF ROM OUTPUT

2. Propagating Specifications Backward

Just as forward propagation computes the behaviors of the data-streams, CRITTER uses a process of back propagation to derive the specifications that these behaviors must meet. More specifically, given specifications (i.e. predicates) involving the output of any module, CRITTER can back propagate them, that is, derive a set of specifications involving the inputs to that module which are sufficient to ensure that the original output specification will be met. (This is a hardware analog of Dijkstra's "Weakest Precondition" [9].) Thus, given specifications involving the "global" outputs of a circuit, CRITTER can repeatedly back propagate them to produce specifications for all data-streams in the circuit.

Like forward propagation, back propagation is also done by a process of substitution. Each reference in the specification to a feature of the module's output is replaced by the corresponding formula from the module's mapping. For instance, Fig. II-9 gives a set of specifications for the output of our example circuit. Back-propagating specification [3] of Figure II-9 through the mapping of the ROM and simplifying gives:

For all i > 0, DURATION[EBCDIC-OUT(i)] > 200
==> DURATION[LATCHED-ASCII(i)] - 250 > 200
==> DURATION[LATCHED-ASCII(i)] > 450 nsec.

This substitution also depends on the I/O mapping being represented as <feature> = <formula>. Of course, in order to produce the right output, the inputs to a module must first meet the operating conditions of the module. So, in addition to the specifications produced by substitution, the specifications on the input to a module must also include the operating conditions. It is this complete set of specifications which is further back propagated.

[1] INDEX-STRUCTURE of EBCDIC-OUT must match '([i (from 1 to +INFINITY)])'
[2] FOR ALL i > 0, VALUE(i) = ASC-TO-EBCD(VALUE[ASCII-IN(i)])
[3] FOR ALL i > 0, DURATION(i) > 200 nsec.
[4] FOR ALL i > 0, START-TIME(i+1) - START-TIME(i) > 500 nsec.

Figure II-9: SPECS INVOLVING STREAM EBCDIC-OUT

3. Checking Behaviors Against Specifications

Having derived the behavior and specifications for each data-stream, CRITTER must check to see if the specifications are met. In our case studies to date, CRITTER has been able to do this by replacing each feature-reference in the specification with the corresponding formula from the behavior, and then applying very straightforward simplifications to produce propositions which can be checked trivially. For instance, specification [4] in Figure II-9 reduces to the inequality 576 > 500, by simple symbolic subtraction of polynomials. It remains to be seen whether further examples will require more sophisticated proof techniques.

When the form of a specification is a single-sided arithmetic inequality, the margin by which that inequality is satisfied (e.g. 76 nsec. in specification [4]) represents a crude measurement of how conservative the circuit design is with respect to that specification. Thus CRITTER can determine not only if a design is correct (meets its specifications), but to some extent how robust it is.
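For the numeric case, the checking step amounts to evaluating each specification over the derived behavior and recording the margin. A hedged Python sketch (our illustration; CRITTER does this symbolically) for specification [4] of Fig. II-9:

# A numeric sketch of checking spec [4] against the derived behavior
# of EBCDIC-OUT: START-TIME(i+1) - START-TIME(i) > 500 nsec.
def start_time(i):
    return 576 * i + 1131        # from Figure II-8

def check_spec4(i):
    gap = start_time(i + 1) - start_time(i)   # = 576 for every i
    satisfied = gap > 500
    margin = gap - 500                        # 76 nsec. of slack
    return satisfied, margin

# check_spec4(1) == (True, 76): the design is correct here, with a
# 76 nsec. margin indicating how conservative the timing is.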
Note that if we could do only forward propagation we could still verify the correctness of the circuit and get robustness measures for the circuit's outputs. If we could do only back propagation we could verify correctness and get robustness measures for the circuit's inputs. However, in order to get robustness estimates for an internal data-stream we need both its (forward-propagated) behavior and all (back-propagated) specifications involving it.

III DISCUSSION

A. "In-Context" Module Descriptions

It should be noted that in our example we described our latch as if the only thing it could do is extend the duration of its input. However, a real latch can also be used, for instance, to select every other element from its input data-stream. What we have done is to place an artificial restriction on the input domain of our latch, in this case requiring there to be a clock pulse for each character. This then allowed us to use a much simpler mapping for the module. This is an example of a usage-specific or in-context module description. Such simplified mappings are much easier for people to write, and they result in less simplification being needed during propagation, but what we wind up with is not really a representation of a "latch" but rather of a "latch used as duration extender".

As long as the behaviors of all input signals of the module obey the more restrictive "in-context" operating conditions, the result of forward propagation is unchanged by the simplification. However, back propagation of specifications will produce input pre-conditions that are not as weak as possible. In verifying a design that is already complete and correct, this is not necessarily bad; it just means that CRITTER may be overly conservative in estimating robustness. When using CRITTER incrementally, however, to evaluate fragments of a design in progress, over-strong pre-conditions might entail frivolous and arbitrary restrictions on possible methods for completing the design. It remains to be seen how much of a problem this is in practice.

It is uncertain if the set of all common component "usages" is small enough that a library of them would be useful, or if in-context descriptions should be generated individually only as needed. We are currently working on methods to automatically generate in-context descriptions from out-of-context descriptions.

B. Data Abstraction

As we have seen, data-streams can be described at various levels of abstraction. In fact, the module that produces a data-stream may employ a different abstraction from the module that uses it. For instance, a counter might be described as transmitting a sequence of integers while a latch which receives these numbers might consider them simply as vectors of bits. We handle this kind of translation by interposing a type-converter pseudo-module to convert the numbers to bit vectors. A pseudo-module is represented just like any other module, but it corresponds to a change in interpretation, rather than to a physical component. The same propagation methods work on pseudo-modules as on real modules.

C. Limitations

1. Feedback and State

CRITTER still has a number of limitations. One major problem is that it cannot handle circuits with inter-module feedback. That is, it can only handle circuits in which the relation "module A uses the output of module B" induces a partial order. This is a less drastic limitation than it might appear, since there are many useful and interesting circuits that have no inter-module feedback (i.e. where feedback loops exist, they are enclosed within a single module).
Nonetheless, this is a serious limitation, and we place a high priority on removing it. We plan to look at both the previous work on cycles in constraint networks (e.g. [1, 3, 4, 5, 6]) and the work on using loop invariants in program verification [9, 10].

Another limitation is that we do not deal explicitly with the internal state of a component such as a RAM or a latch. This, like internal feedback loops, can be concealed from CRITTER by choosing a suitably abstract "in-context" description for a module. We expect that once we can handle feedback, we can model internal state as an implicit feedback path, as is done with formal Finite State Automata.

2. Uncertainty

Our "<feature>=<formula>" notation for behaviors and mappings implies that we know a precise formula for each feature. However, for parameters like delay through a component, the parts catalogue normally gives a range of possible values. One solution to this is to embed uncertainty in these formulas by introducing slack variables to represent it, e.g.

for all i > 0, START-TIME(out(i)) = START-TIME(in(i)) + SLACK_i

The restrictions on the uncertainty, e.g. 10 nsec. < SLACK_i < 20 nsec., can be carried along separately.

D. Relation to Constraint Propagation

Our desire to do propagation both forward and backward and to eventually handle feedback led us to view CRITTER as a kind of constraint propagation system. However, much previous constraint-system work [1, 3, 4, 5, 6] has focused on problems of finding consistent solutions to complex networks of constraints, and has assumed that the individual values and constraints were quite simple. In contrast, our focus has been more like that of Stefik [11], in that we have put off dealing with complex networks (e.g. feedback), but have dealt with richer kinds of values and constraints. In particular:

- Because we need to deal with entire time histories of signals and even arbitrary predicates on these time histories, rather than with single values or small sets of values, our individual propagation steps involve symbol substitution rather than, e.g., arithmetic operators. In fact, we really have two kinds of propagation, one for behaviors and another for specifications, but we can use the same representation of our constraint (i.e. the module function) and just apply different processes to it for the two kinds of propagation.

- Since a specification really expresses a constraint among several data-streams (e.g. conditions 4-6 in Fig. II-4), back-propagation is really the propagation of one kind of constraint (a specification) through another kind of constraint (a function). We could even express both these constraints in the same language, except that in order to do the substitution, the function mapping has to be in a <feature> = <formula> form.

E. Early Experiences With CRITTER

CRITTER has been used to test a fragment of a mature circuit design (1976) for a TTL-based CRT terminal video controller. About thirty specifications in all had to be satisfied, on a total of nine data-streams. Surprisingly, it quickly discovered a potential timing anomaly that had never been noticed before in conventional testing, nor in actual use of the circuit.

IV CONCLUSIONS

CRITTER thus embodies varied and useful kinds of reasoning about digital circuits. These reasoning abilities are useful for automatically critiquing a circuit, and should be applicable to several other tasks as well.
One pressing need is to extend CRITTER to handle circuits with feedback and components with state. We also plan to implement mechanisms to handle the slack variables, and to try to apply CRITTER's reasoning methods to other tasks, such as trouble-shooting and automatic design.

V Acknowledgments

The work reported here was done as part of the Digital Design Project at Rutgers University. Other members of the group, including Tom Mitchell, Pat Schooley, Jeff Shulman, and Tim Weinrich, have made significant contributions to the ideas and programs discussed above. Tom Mitchell and Pat Schooley also made a number of particularly helpful comments on earlier drafts of this paper.

References

[1] Borning, A. "The Programming Language Aspects of ThingLab, a Constraint-Oriented Simulation Laboratory." ACM Transactions on Programming Languages and Systems 3:4, October (1981), 353-387.

[2] de Kleer, Johan and Gerald J. Sussman. "Propagation of Constraints Applied to Circuit Synthesis", Memo No. 485, M.I.T., September 1978.

[3] de Kleer, Johan. Causal and Teleological Reasoning in Circuit Recognition, PhD dissertation, M.I.T., January 1979.

[4] Stallman, R.M. and Sussman, G.J. "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis." Artificial Intelligence 9:2, October (1977), 135-196.

[5] Steele, G.L. The Definition and Implementation of a Computer Programming Language Based on Constraints, PhD dissertation, M.I.T., August 1980.

[6] Steels, L. "Constraints as Consultants", AI Memo 14, Schlumberger-Doll Research, December 1981.

[7] Waldinger, R. J. "Achieving Several Goals Simultaneously," in Machine Intelligence 8, Elcock, E. and Michie, D., ed., Ellis Horwood, Chichester, 1977, 94-136.

[8] Mitchell, T., Steinberg, L., Smith, R.G., Schooley, P., Kelly, V. and Jacobs, H. "Representations for Reasoning About Digital Circuits." Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1 (1981), 343-344.

[9] Dijkstra, Edsger W. A Discipline of Programming. Englewood Cliffs, N.J.: Prentice-Hall Inc., 1976.

[10] Greif, Irene and Meyer, Albert R. "Specifying the Semantics of while Programs: a Tutorial and Critique of a Paper by Hoare and Lauer." ACM Transactions on Programming Languages and Systems 3:4, October (1981), 484-507.

[11] Stefik, M.J. Planning With Constraints, PhD dissertation, Stanford University, January 1980, CSD Report Stan-CS-80-784.
IDT: AN INTELLIGENT DIAGNOSTIC TOOL

Hal Shubin
John Wade Ulrich
Knowledge Engineering Group
Digital Equipment Corporation
South Lawrence, MA 01843

ABSTRACT

IDT is an intelligent hardware diagnostic tool that has been successfully used to identify faults in PDP 11/03 computers. It selects and executes tests, and interprets the results. IDT is able to modify its test selection strategy on the basis of results of previous tests as well as opinions offered to it by the user. Symbolic formulas are used to represent the relationship between the test results and the broken equipment. If we assume that there is only one broken component, then set operations can be shown to be sufficiently general for combining the results of multiple tests.

I INTRODUCTION AND OVERVIEW

The diagnosis of broken computing equipment is an intellectually challenging activity requiring a technician to maintain and to reason about a large collection of related facts. These facts are obtained directly by observation or indirectly by interpretation of the results of diagnostic tests. The technician must recognize when the facts implicate an identifiable part of the computer, called a field replaceable unit (FRU). When the known facts are insufficient, the technician chooses a diagnostic test that is likely to establish facts leading to the identification of the broken FRU.

IDT is a tool designed to help a technician identify which FRU should be replaced to fix the broken computer. It can run in either an automatic or an interactive mode. In its automatic mode, IDT selects, runs and interprets the results of diagnostic tests. When running interactively, IDT assumes a more subordinate role of making suggestions and summarizing results. IDT may move freely between these two modes of operation as the user's experience or needs dictate. In both modes, the technician is kept informed of the progress of the diagnosis by means of a display which is updated as information is gathered.

Previous diagnostic tools (e.g., Digital Equipment Corp.'s Techmate [3]) were unable to dynamically alter their test selection strategies. These systems used a predetermined testing order that guaranteed that the preconditions of each diagnostic test were satisfied before the test was run.

Two computers, along with the unit under test (UUT), are involved. One is remote and one is local to the UUT. The remote computer, a VAX 11/780, contains the knowledge base, reasoning mechanisms and testing strategies. The local computer, a PDP 11/03, contains the diagnostic test series and the display software. The two computers communicate over a 4800-baud telephone line.

[Figure 1: Current Configuration of IDT]

The user initiates a diagnostic session by powering up the 11/03, which then telephones the 11/780 and logs in to a special account, initiates the user's display and then retires to a passive role. Future activities of the diagnostic process are controlled by the 11/780, with the 11/03 passing messages, loading and running tests when told to do so, and managing the display.

Figure 2 shows the beginning of a diagnostic session. The screen is divided into three regions. The top region is a menu from which the user may select the next operation to be performed by IDT. The middle section contains a description of the current activity of IDT. The bottom section displays the state of the diagnostic process. After the final menu item is the instruction "enter choice".
This informs the user that he may now make a menu selection. The user has requested menu option 2, which allows him to tell IDT which piece of equipment he thinks may be broken.

In Figure 3, the system has responded to the user's request to enter an opinion by displaying the message "enter opinion". The user responded by typing "(read-write)". By typing "(read-write)" the user has indicated that he believes that the read-write board may be faulty and that IDT should try to run tests that will verify this opinion. The user could have typed "(not read-write)" to indicate that IDT should base its testing strategy on the opinion that the read-write module is not broken. Combinations of opinions can also be accepted.

In addition to the read-write unit, there are four other FRUs in the RX02 subsystem. They are: interface, controller, disk-drive (the mechanical portion of the subsystem) and diskette (any floppy disk that is in the drive being tested). The complete list of the FRUs is displayed in the lower third of the screen. Associated with each FRU is a set of atomic "functional" units (AUs). These atomic units constitute the parts of the FRUs that can be tested by the diagnostic tests. The diskette and disk-drive have only one functional subpart, namely themselves.

IDT Diagnostics
1. SELECT a test              5. Display HELP information
2. Enter an OPINION           6. STOP this run
3. Allow Program to CHOOSE
4. Show STATUS

Initializing tables. Ready to start now.
Demonstration of the IDT project being developed in the KE group at Digital Equipment Corp., in Tewksbury, MA.

FRU          Atomic Units suspected (empty => NOT suspected)
interface    int1 int2 int3 int4 int5
controller   ctl1 ctl2 ctl3 ctl4 ctl5 ctl6 ctl7
read-write   rw1 rw2 rw3 rw4
diskette     diskette
disk-drive   disk-drive

Figure 2: IDT's Initial Display

enter opinion: (read-write)
Adding your opinion to the tables.
Selecting a test based on your opinion.
Your opinion caused test 17 to be selected.
Analyzing the result of the test. Test 17 failed.
Nothing to report about: interface, read-write, controller
The tests have eliminated: diskette, disk-drive

FRU          Atomic Units suspected (empty => NOT suspected)
interface    int2
controller   ctl1 ctl2 ctl3 ctl4 ctl5 ctl6 ctl7
read-write   rw1 rw2 rw3 rw4
diskette
disk-drive

Figure 3: User's Opinion Processed

As a result of giving IDT the opinion that the read-write module is broken, IDT has selected and run test 17. After analyzing the results of a test, IDT removes from the display the AUs that have been demonstrated to function properly. In this case they are int1, int3, int4, int5, diskette, and disk-drive. The fact that the diskette and disk-drive have no more untested atomic units associated with them implies that they are functioning correctly; that is, they contain no AUs that are suspected. Notice that the interface has almost been eliminated from suspicion, but it still contains one atomic unit that might be broken.

In Figure 4, the user has indicated that IDT should select and run another diagnostic test. The middle section of the display indicates that IDT has selected and run test 24. In this case, the choice of test 24 was based on the user's opinion and the result of test 17. As a result of the outcome of test 24, all atomic units except ones associated with the read-write unit have been identified as working correctly. Since the only atomic units that may be broken belong to the read-write unit, it is the read-write module that must be replaced.

[Figure 4: Diagnosis Complete]
There are several user options that have not been illustrated by the above example. The options whose meanings are not obvious are 1 and 4. Option 1 is probably only useful for users that are very familiar with the set of diagnostic tests used by IDT, for it allows the user to specify that a particular test should be run next. Option 4 causes IDT to display all previous test results and opinions.

III FORMULATION OF THE DIAGNOSTIC PROCESS

IDT must perform two basic functions:

- It must analyze the results of the tests to determine which FRU should be replaced.
- It must select diagnostic tests from a set of tests. Test selection is based on the knowledge acquired from previous tests and from opinions entered by the user.

To accomplish these functions IDT has:

- a method for interpreting the results of running diagnostic tests
- a method for reasoning about interpreted test results
- a strategy for selecting the testing order.

The remainder of this section is divided into three parts, each devoted to one of the above topics.

A. INTERPRETATION OF TEST RESULTS

To interpret test results, an adequate model of the unit under test must be developed. This is done by studying the diagnostic tests and the block diagrams of the equipment to determine the FRUs of the unit under test and the atomic units of each FRU. For our purposes atomic units are the smallest subparts of the UUT that can be distinguished from other parts by the set of diagnostic tests. An AU may be any part of the UUT (e.g., a motor, a chip, circuitry, cabling). We require that no AU belong to more than one FRU. Without this requirement, it would sometimes be impossible to distinguish between two FRUs. If the diagnostic tests exist before the model is created, then the atomic units are derived by examining the diagnostics. If diagnostic tests do not already exist, then atomic units may be selected first and diagnostic tests then written to distinguish among them. In the latter case, care must be taken to pick AUs of appropriate size. If the atoms are too large, then the test programs will be overly complex. If the atoms are too small, the diagnostic tests required will be too numerous and difficult to interpret.

Knowledge about atomic units is represented by logical formulas. The set of atomic formulas is formed by associating with each atomic unit an identifying symbol. Each such symbol is interpreted as asserting that the associated atomic unit is broken. Non-atomic formulas are formed by combining atomic formulas with the usual set of logical connectives. For example, if A and B are atomic formulas associated with the atomic units x and y, then the formula (A ∨ B) is taken to mean that either unit x or unit y is broken.

To facilitate reasoning about test results, it is useful to require that a test have only two possible outcomes, called "pass" and "fail". If x is a test, then we associate with x two formulas, PASS(x) and FAIL(x). These formulas represent the knowledge obtained when test x is run. Each of the formulas used to represent test results has the form P => R, where P is a statement called the precondition of the test, and R is a formula called the result. The formula P represents the condition that must be true in order that the test result be interpreted correctly, and the formula R represents the condition that is implied by the interpreted outcome of the test.
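As a hedged illustration (the encoding below is ours, not IDT's internal representation), a test's two outcome formulas can be represented as precondition/result pairs:

# A sketch of outcome formulas for one test x, each of the form
# P => R. Formulas are kept as strings in the paper's notation;
# the atomic symbols A, B, C, D are hypothetical atomic units.
TEST_X = {
    "FAIL": {"precondition": "¬A", "result": "C ∨ D"},
    "PASS": {"precondition": "B",  "result": "¬C"},
}

def interpret(test, outcome):
    """Return the implication contributed by running the test."""
    f = test[outcome]
    return f["precondition"] + " => " + f["result"]

# interpret(TEST_X, "FAIL") == "¬A => C ∨ D"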
The advantage of this interpretation of test results is that it permits tests to be run without first determining whether the preconditions are satisfied. This is important since the preconditions may often be verified indirectly by combinations of other test results. Also, preconditions of tests may be used to guide the testing strategy. For instance, suppose that a particular test has the following two formulas associated with it:

¬A => C ∨ D   (test failed)
B => ¬C       (test passed)

(Remember that "A" means that A is broken and "¬A" means that A is not broken.) The work required to verify (¬A) before running the test may be wasted in the case that the test passes. If instead we run the test with no concern for the preconditions, we may find out whether we wish to verify (¬A) or B. Thus, the yet-unverified preconditions of a test result may be used to guide further testing.

The formulas obtained from running tests are combined to form statements about FRUs. Initially each FRU, say x, is mapped onto a pair of formulas: INSIDE(x) and OUTSIDE(x). These formulas represent the atomic units that belong to x and those that belong to FRUs other than x, respectively. As tests are run, the formulas representing the results are conjoined with all of the other formulas in the system and the resulting formulas are simplified. Thus, results from tests affect what is known about the broken equipment as well as what information will be obtained from running new tests. Outcomes of tests become predictable on the basis of previously run tests if their outcome formulas simplify to true or false. It is obviously unnecessary to run such tests since their results can add no new information to the system. When INSIDE(y) simplifies to truth, then the FRU y is known to contain a broken atomic unit.

B. REASONING ABOUT TEST RESULTS

If we interpret an FRU as a set of atomic units, then the goal of the diagnostic process is to prove a formula of the form:

A1 ∨ A2 ∨ ... ∨ An

where the set {A1, A2, ..., An} is a subset of the atomic units on a single FRU. In general, such a proof requires propositional theorem proving about formulas that could contain thousands of atoms. To reduce the complexity of the problem, we make the extra-logical assumption that there is only one broken atomic unit. This "single-fault assumption" is frequently made by technicians when diagnosing broken equipment, and has been used by other diagnostic projects [4, 5]. In terms of the logical formulas described above, the single fault assumption takes the following form:

Assumption: There is exactly one true atomic formula.

In addition to the single fault assumption, there is another characteristic of our formulas that we now make explicit:

Assumption: There is a finite set of atomic formulas.

It should be obvious that this assumption applies in our situation since there are a fixed finite number of atomic units. The following theorems show that these assumptions greatly reduce the complexity of the theorem-proving problem. Theorem 1 shows that each of our formulas can be represented as a set of positive literal formulas. Theorem 2 shows that set intersection is adequate for conjoining formulas represented as sets of positive literals. Theorems 2 through 5 form the basis for an algorithm that will convert arbitrary formulas into their representative set of literals. In each of the theorems we assume that there are n possible atomic formulas and that exactly one atomic formula is true.
Theorem 1: If E is a formula, then there exists a disjunctive formula containing only positive literals (un-negated atoms) that is equivalent to E.

Proof: Suppose that E(L1...Ln) is a formula where L1...Ln is the set of all positive literals. (Not all literals may actually occur in the formula E(L1...Ln).) Let E(Li=t) denote the result of substituting in the formula E(L1...Ln) the logical value t for Li, and f for Lj for each j ≠ i. By the single fault assumption, the formula (L1 ∨ ... ∨ Ln) is identically true. Therefore the following formulas are each equivalent:

1. E(L1...Ln)
2. E(L1...Ln) ∧ (L1 ∨ ... ∨ Ln)
3. (E(L1...Ln) ∧ L1) ∨ ... ∨ (E(L1...Ln) ∧ Ln)
4. (E(L1=t) ∧ L1) ∨ ... ∨ (E(Ln=t) ∧ Ln)

Since E(Li=t) evaluates to t or f for each i, formula 4 is equivalent to a disjunctive formula containing only positive literals.

Clearly, each formula, say E, can be represented by a set of positive literals (namely those positive literals in the disjunctive formula equivalent to E). We use the notation |E| to denote this set of literals.

Theorem 2: If A and B are formulas then |A ∧ B| = |A| ∩ |B|.

Proof: Let A and B be two formulas and let |A| = {a1 ... am} and |B| = {b1 ... bk}. Then from theorem 1 plus repeated application of the distributive laws we obtain |A ∧ B| = |(a1 ∧ b1) ∨ ... ∨ (ai ∧ bj) ∨ ... ∨ (am ∧ bk)|. But by the single fault assumption, (ai ∧ bj) = f if ai ≠ bj, and (ai ∧ bj) = c if ai = bj = c. From this it follows that c belongs to |A ∧ B| iff c belongs to |A| ∩ |B|.

Theorem 3: If A and B are formulas then |A ∨ B| = |A| ∪ |B|.

Proof: By the associative law.

Theorem 4: If A is a formula and U the entire finite set of atomic formulas, then |¬A| = U - |A|.

Proof: Let |A| = {a1 ... am}. Then by theorem 1 and DeMorgan's law, ¬A = (¬a1 ∧ ... ∧ ¬am). But the single fault assumption implies (¬a1 ∧ ... ∧ ¬am) = (b1 ∨ ... ∨ bk) where {b1 ... bk} = U - {a1 ... am}. The result follows.

Theorem 5: If A is an atomic formula then |A| = {A}.

Proof: Obvious.

If the equalities in theorems 2 through 5 are interpreted as rewrite rules, they can be iteratively applied to convert arbitrary formulas into representative sets of positive literals. Since the computational complexity of the set operations is n squared, the computational cost of such a conversion is reasonably small. Also, since set intersection is the only operation necessary for combining test results, the cost of integrating the results of tests is at worst n squared.

If two faults can occur, we can transform all formulas into another set of equivalent formulas whose atoms are chosen from the cross product of the original set of atomic formulas. If, in this transformed set of formulas, the atomic unit x corresponds to (y, z) in the original, we choose the formulas so that x is true if and only if y and z are true in the original. The single fault assumption now holds for this new set of formulas. This process can be iterated for greater numbers of possible faults. If k is the number of possible faults, then the cost of combining test results grows as (n^k)^2, where n is the number of atomic units.
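These rewrite rules translate directly into set code. A small Python sketch (ours, for illustration only) of the conversion for formulas built from atoms with AND, OR, and NOT:

# A sketch of theorems 2-5 as set operations. A formula is a nested
# tuple: an atom name, ("and", f, g), ("or", f, g), or ("not", f).
U = frozenset(["int1", "int2", "rw1", "rw2"])   # all atomic formulas

def literals(f):
    """|f|: the set of positive literals representing formula f."""
    if isinstance(f, str):                  # Theorem 5: |A| = {A}
        return frozenset([f])
    op = f[0]
    if op == "and":                         # Theorem 2: intersection
        return literals(f[1]) & literals(f[2])
    if op == "or":                          # Theorem 3: union
        return literals(f[1]) | literals(f[2])
    if op == "not":                         # Theorem 4: complement
        return U - literals(f[1])
    raise ValueError(op)

# Combining two test results under the single fault assumption:
# literals(("and", ("or", "rw1", "rw2"), ("not", "rw2"))) == {"rw1"}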
C. TEST SELECTION

In addition to reasoning about test results, IDT can select and initiate new tests. To accomplish this, IDT uses statistical information that predicts the likelihood that a particular AU will be broken, opinions offered to it by the user, and results of previous tests. To process such information, IDT maps each test, say x, onto two formulas, PASS*(x) and FAIL*(x). Initially, PASS*(x) and FAIL*(x) are identical to the functions PASS(x) and FAIL(x). As the user gives opinions in terms of logical statements about FRUs (see the example in Section 2), these statements are mapped via the functions INSIDE and OUTSIDE onto statements about atomic units. The resulting formulas are then conjoined with each PASS* and FAIL* formula. It is possible that both PASS*(x) and FAIL*(x) are false. This would indicate an inconsistency in the opinions presented to IDT or a violation of the single-fault assumption.

Intuitively, the values of PASS*(x) and FAIL*(x) represent the information that IDT thinks will be obtained when x is run. For instance, if FAIL*(x) = false, then IDT thinks that the test will not fail, since failure would imply something it thinks is impossible. In this case, no information could be obtained from running test x. In case the user does not wish to offer an opinion, FAIL* and PASS* remain equal to FAIL and PASS, respectively.

By combining the set of objects identified by FAIL*(x) and PASS*(x) with the statistical measure of the probability that each AU will fail, we get a measure of the probability of each test failing and passing. We can associate with each test x the expected size of the smallest |OUTSIDE(y)|, where y ranges over the set of FRUs. Call this size the preference number of x. The test selection criterion is then: pick the test with the smallest preference number. This test selection strategy picks tests that drive OUTSIDE(y) to false for some y. Intuitively, the justification for this strategy should be obvious; when OUTSIDE(y) = false, then y is known to be broken.

Because of a lack of statistical data, all experimentation with IDT has been made with the assumption that all failures are equally likely. However, even with this assumption, the test selection strategy appears to focus quickly on the most likely broken FRU when guided by appropriate user opinions. When guided by incorrect opinions, however, the system can be temporarily misled.

IV SUMMARY

We have described an intelligent diagnostic tool (IDT) which is demonstrably useful in tracking down bugs in broken computers. It has the following properties:

- IDT uses a rather high-level model of the equipment to be tested.
- If IDT is used with an existing set of diagnostic programs, then the model can be derived from the diagnostics. Otherwise a model must be specified and a set of diagnostic tests created.
- IDT can analyze the results of running diagnostic tests and can select new tests to run based on information provided by previous test results.
- IDT can also accept and process opinions offered to it by a technician. These opinions guide the diagnostic process along lines thought to be promising by the technician.
- IDT may use statistical information to guide its testing strategy so that the diagnostic process will be biased toward frequently occurring problems.

V REFERENCES

[1] Foderaro, J.K., "The FRANZ LISP Manual", University of California, 1980.
[2] Forgy, C.L., "OPS5 User's Manual", Carnegie-Mellon University, 1981.
[3] Kiskiel, M., et al., "Techmate Functional Specification", Digital Equipment Corp., 1981.
[4] Tendolkar, N.N., and R.L. Swann, "Automated Diagnostic Methodology for the IBM 3081 Processor Complex", IBM Journal of Research and Development, 26:1 (1982), 78-88.
[5] Davis, R., et al., "Diagnosis Based on Structure and Function", Proceedings of the 1982 National Conference on Artificial Intelligence, 1982.
REACTOR: AN EXPERT SYSTEM FOR DIAGNOSIS AND TREATMENT OF NUCLEAR REACTOR ACCIDENTS

William R. Nelson
EG&G Idaho Inc.
P.O. Box 1625
Idaho Falls, ID 83415

ABSTRACT

REACTOR is an expert system under development at EG&G Idaho, Inc., that will assist operators in the diagnosis and treatment of nuclear reactor accidents. This paper covers the background of the nuclear industry and why expert system technology may prove valuable in the reactor control room. Some of the basic features of the REACTOR system are discussed, and future plans for validation and evaluation of REACTOR are presented. The concept of using both event-oriented and function-oriented strategies for accident diagnosis is discussed. The response tree concept for representing expert knowledge is also introduced.

I BACKGROUND

The responsibilities of an operating crew of a commercial nuclear reactor can be compared with those of a medical doctor. During normal operation, little care is required to monitor and maintain the reactor. When an emergency occurs, however, quick and efficient diagnosis and treatment of the problem is essential. If the diagnosis and treatment are effective, most incidents can be terminated without serious consequences. However, if the diagnosis is incorrect or the treatment improper, the consequences could be severe.

A commercial nuclear power plant is a complex combination of systems. There are two types of reactors in commercial service--the Pressurized Water Reactor (PWR) and the Boiling Water Reactor (BWR). This paper discusses the PWR. The reactor core itself is contained in a large reactor vessel (see Figure 1). Control rods provide one mechanism for controlling the rate of the nuclear reaction. Heat is removed by the Primary Coolant System (PCS), which circulates water through the reactor vessel. Sufficient pressure to prevent boiling in the PCS is provided by the pressurizer. If a pipe break occurs in the primary coolant system (the so-called LOCA or Loss of Coolant Accident), the reactor is automatically shut down and the Emergency Core Cooling System (ECCS) provides cooling water to the reactor core. Continued cooling is required after shutdown to remove radioactive decay heat. Any radioactive materials which escape the PCS are contained in the Containment Building.

Figure 1. Schematic of a pressurized water reactor facility (control rods, borated water storage tank (BWST), ECCS, turbine-generator, condenser, feedwater pump, containment building).

Heat is removed from the primary coolant system by the Secondary Coolant System (SCS). Water is turned to steam in the steam generator. Then the steam flows to the turbine-generator assembly where electrical energy is produced. The steam is then condensed back to the liquid state and returned to the steam generator by the feedwater pump.

The accident at the Three Mile Island (TMI) nuclear power plant in 1979 demonstrated that the information resources available to the reactor operator were not adequate to meet his decision-making needs under all circumstances. For example, during the first few minutes of the TMI event, at least 100 annunciators alarmed in the control room. In the face of this extreme information overload, the TMI operators misinterpreted the situation and shut off the ECCS. As a result, the nuclear fuel was severely damaged. Since the TMI accident, many groups have recommended changes that would assist the operator in diagnosing and responding to reactor accidents [1,2].
In particular, many recommend that computerized decision aids be developed. These aids would help the operator integrate the large amount of information in the control room and interpret its significance.

The Augmented Operator Capability (AOC) Program was initiated by EG&G Idaho in 1979 to help provide an improved decision-making environment for the reactor operator. The program has been funded by the Nuclear Regulatory Commission (NRC). One of the goals of the AOC program has been to develop and evaluate advanced decision aids for reactor operators. An early effort involved the application of the response tree technique [3,4] as a computerized decision aid for reactor operators. It soon became apparent [5] that the knowledge-based expert system [6,7] would provide a powerful tool for reactor operators, using response trees to represent some of the expert knowledge required. The integration of color graphic displays, response trees, and expert system technology has resulted in REACTOR, an expert system for the diagnosis and treatment of nuclear reactor accidents.

II REACTOR

The purpose of REACTOR is to monitor a nuclear reactor facility, detect deviations from normal operating conditions, determine the significance of the situation, and recommend an appropriate response. It performs these tasks by operating on a large knowledge base with a procedure that reasons both forward and backward. The reasoning process is quite basic, having been adapted from Winston and Horn's animal identification system [8]. The system reasons forward from known facts until a conclusion can be reached. If not enough information is available to reach a conclusion, the system reasons backward to determine what information it needs to know. REACTOR will then query plant instruments or the operator in order to fill the gaps in its knowledge.
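To make the reasoning cycle concrete, here is a minimal Common Lisp sketch of a Winston-and-Horn-style backward chainer over IF-THEN rules of the kind shown in Figure 2 below. It is an illustration of the general technique, not REACTOR's actual code; the rule subset and the use of y-or-n-p to stand in for instrument and operator queries are assumptions.

  (defparameter *rules*
    ;; (name antecedents consequent), abridged from Figure 2
    '((rule1 ((pcs pressure decreasing) (hpis on))
             (pcs integrity challenged))
      (rule8 ((pcs integrity challenged) (sg level increasing))
             (accident is steam-generator-tube-rupture))))

  (defvar *facts* '())

  (defun rules-concluding (goal)
    (remove-if-not (lambda (rule) (equal (third rule) goal)) *rules*))

  (defun establish (goal)
    "Backward-chain on GOAL, querying the operator when no rule concludes it."
    (cond ((member goal *facts* :test #'equal) t)
          ((rules-concluding goal)
           ;; reason backward through the antecedents of each candidate rule
           (some (lambda (rule)
                   (when (every #'establish (second rule))
                     (push goal *facts*) ; record the deduction
                     t))
                 (rules-concluding goal)))
          ;; no rule applies: fill the gap by asking
          ((y-or-n-p "IS THIS TRUE? ~a " goal)
           (push goal *facts*)
           t)))

  ;; (establish '(accident is steam-generator-tube-rupture))
  ;; reproduces the style of dialogue shown in Figure 4.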
REACTOR's knowledge base contains two types of knowledge: function-oriented knowledge and event-oriented knowledge. Function-oriented knowledge concerns the configuration of the reactor system and how its components work together to perform a given function. Event-oriented knowledge describes the expected behavior of the reactor under known accident conditions. Event-oriented knowledge has been gathered from past experience with actual accidents, experiments in test reactors, and analysis of computer simulation models. Event-oriented knowledge is useful for identifying an accident which fits the pattern of pre-analyzed events. However, when an event occurs which does not match an expected pattern, "mindset" can occur and operators may ignore relevant information in an attempt to confirm their assumed diagnosis. In such a situation it is very important to consider function-oriented information so that all relevant facts are given adequate consideration.

REACTOR's event-oriented knowledge is contained in a series of IF-THEN rules. A sample of the IF-THEN rules is shown in Figure 2. The same rules are shown in an AND/OR tree in Figure 3. Notice that REACTOR knows about four different accidents: loss of feedwater, steam line break, steam generator tube rupture, and LOCA. Figure 4 shows a sample dialogue between a reactor operator and REACTOR to diagnose a steam generator tube rupture.

((RULE 1 (IF (PCS PRESSURE DECREASING) (HPIS ON))
         (THEN (PCS INTEGRITY CHALLENGED)))
 (RULE 2 (IF (PCS TEMPERATURE INCREASING))
         (THEN (PCS-SCS HEAT TRANSFER INADEQUATE)))
 (RULE 3 (IF (SG LEVEL DECREASING))
         (THEN (SG INVENTORY INADEQUATE)))
 (RULE 4 (IF (HIGH CONTAINMENT RADIATION) (HIGH CONTAINMENT PRESSURE))
         (THEN (CONTAINMENT INTEGRITY CHALLENGED)))
 (RULE 5 (IF (PCS-SCS HEAT TRANSFER INADEQUATE) (LOW FEEDWATER FLOW))
         (THEN (ACCIDENT IS LOSS OF FEEDWATER)))
 (RULE 6 (IF (SG INVENTORY INADEQUATE) (LOW FEEDWATER FLOW))
         (THEN (ACCIDENT IS LOSS OF FEEDWATER)))
 (RULE 7 (IF (PCS INTEGRITY CHALLENGED) (CONTAINMENT INTEGRITY CHALLENGED))
         (THEN (ACCIDENT IS LOCA)))
 (RULE 8 (IF (PCS INTEGRITY CHALLENGED) (SG LEVEL INCREASING))
         (THEN (ACCIDENT IS STEAM GENERATOR TUBE RUPTURE)))
 (RULE 9 (IF (SG INVENTORY INADEQUATE) (HIGH STEAM FLOW))
         (THEN (ACCIDENT IS STEAM LINE BREAK))))

Figure 2. Event-oriented IF-THEN rules.

Figure 3. AND/OR tree (the rules of Figure 2 drawn as a tree rooted in the four accident conclusions).

(DIAGNOSE)
(IS THIS TRUE? PCS TEMPERATURE INCREASING) ?NO
(IS THIS TRUE? SG LEVEL DECREASING) ?NO
(IS THIS TRUE? PCS PRESSURE DECREASING) ?YES
(IS THIS TRUE? HPIS ON) ?YES
(RULE 1 DEDUCES PCS INTEGRITY CHALLENGED)
(IS THIS TRUE? HIGH CONTAINMENT RADIATION) ?NO
(IS THIS TRUE? SG LEVEL INCREASING) ?YES
(RULE 8 DEDUCES ACCIDENT IS STEAM GENERATOR TUBE RUPTURE)
(ACCIDENT IS STEAM GENERATOR TUBE RUPTURE)

Figure 4. Diagnosis of steam generator tube rupture.

If an accident cannot be diagnosed using the event-oriented approach, it is then necessary to use the function-oriented strategy. The function-oriented capabilities of REACTOR are handled by the response tree technique. A response tree is a diagram which shows the success paths which can be used to provide a given safety function. A safety function [9] is a group of actions which prevents damage to the reactor core or release of radioactivity outside the containment building. Each specific set of actions which provides a safety function is called a success path. Because of the redundancy of nuclear plant systems, each safety function can be provided by more than one success path. When component failures cause success paths to be unavailable, another success path can be implemented automatically or by operator action.

Figure 5 is a piping diagram of the Low Pressure Injection System (LPIS), which is part of the emergency core cooling system. The system shown is from the Loss of Fluid Test (LOFT) reactor, a test reactor designed to represent a commercial PWR. The LPIS is used to provide the core cooling safety function during an emergency.

Figure 5. Schematic of Low Pressure Injection System (LPIS) (BWST, LPIS A and B pumps, heat exchangers, downcomer and lower plenum injection points, blowdown loop, Blowdown Suppression Tank (BST)).

Figure 6 is a response tree which shows the success paths which can be used to provide core cooling using components of the LPIS. Each path from the bottom of the tree to the top represents a different success path. Each success path has been assigned a mode number which reflects its relative effectiveness in providing core cooling. The lowest mode numbers are the most desirable. When an accident disables LPIS components, the success path with the lowest mode number is chosen from those which are still available.

Figure 6. Response tree for LOFT LPIS (injection point, route, pump, water source, and heat sink levels, with mode numbers).

Figure 6 shows how REACTOR would evaluate the response tree for an event which disables LPIS Pump A, the downcomer injection point, and the Borated Water Storage Tank (BWST). REACTOR determines which success paths are not available (as indicated by shading) and selects path number 24B for implementation. This success path is displayed schematically to the operator as shown in Figure 7.
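The mode-number selection rule lends itself to a compact sketch. The following Common Lisp fragment is illustrative only (the path names, components, and mode numbers are invented stand-ins, not LOFT data): each success path carries a property list of its components, and the path chosen is the lowest-mode path none of whose components is disabled.

  (defparameter *success-paths*
    '((path-a   :mode 1  :components (lpis-pump-a hx-a downcomer-injection bwst))
      (path-24b :mode 24 :components (lpis-pump-b hx-b lower-plenum-injection bst))))

  (defun available-p (path disabled)
    "A success path is available when none of its components is disabled."
    (null (intersection (getf (rest path) :components) disabled)))

  (defun select-success-path (paths disabled)
    "Choose the available success path with the lowest mode number."
    (first (sort (remove-if-not (lambda (p) (available-p p disabled)) paths)
                 #'< :key (lambda (p) (getf (rest p) :mode)))))

  ;; e.g. (select-success-path *success-paths*
  ;;        '(lpis-pump-a downcomer-injection bwst))  ; => the PATH-24B entry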
III STATUS AND PLANS

To exercise the response tree logic, REACTOR has access to a large knowledge base concerning the LPIS. A property list for each component contains information such as its electrical power supply. The response tree structure itself is also contained in the knowledge base. Each success path has a property list of the elements which make up the success path. Each element of a success path has a property list of the components it contains.

We are currently in the process of integrating the elements of the REACTOR system. The response tree technique was developed in 1978 and implemented in the procedures of the LOFT facility. In 1980 work was begun to implement the response tree approach as a computerized decision aid for reactor operators. Implementation of the event-oriented portions of REACTOR's knowledge base was begun in early 1982 when a LISP interpreter became available. A LISP program for evaluating response trees was written in April 1982. Our current efforts emphasize the development of a more complete set of event-oriented IF-THEN rules and completion of a Fortran program which produces color graphic displays for the operator.

Figure 8 shows the overall structure of the REACTOR system. Normally, REACTOR will receive data directly from plant instruments. However, it can also query the operator for further information. The operator can modify the knowledge base either by answering REACTOR's questions or by directly modifying the knowledge base or rule structure. At present, REACTOR monitors a reactor simulation rather than an actual nuclear plant. This allows REACTOR to be exercised with a wider range of events than would be possible using an actual nuclear facility.

Future work will include trials of the REACTOR system using a realistic operating environment for the reactor operator. A prime concern is to improve the efficiency of REACTOR's rule manipulation process so that a conclusion can be reached quickly without taxing the operator's patience. If simulation trials go well, we hope to apply REACTOR in an actual nuclear plant.

IV CONCLUSIONS

Application of a knowledge-based expert system to nuclear reactor operations represents a significant new application of artificial intelligence methods. Although expert systems are beginning to demonstrate their value in a number of industrial settings, the nuclear application is unique in many ways. For instance, the nature of the decision process (on-line process control) is different from the usual expert system application. REACTOR will receive information directly from an operating plant.
Decisions concerning plant operations must be made within adequate time to allow an effective action to be taken. Also, operation of a nuclear power plant requires the effective integration of a huge amount of information. Because of the safety requirements for reactor systems, the number of components to be monitored is larger than in many other process control situations.

Certain features of REACTOR may prove valuable to developers of expert systems in other fields. The integration of event-oriented and function-oriented diagnostic strategies provides a powerful combination for handling emergency situations. In many fault diagnosis problems, it is difficult to determine the exact cause of the problem. When this occurs, it is helpful to be able to use function-oriented techniques to deal with the situation. This approach could prove beneficial in other fields such as medical diagnosis.

The response tree seems to be a useful tool for representing knowledge in fault diagnosis tasks. By representing all the paths available to provide a function and then prioritizing each path, it is possible to embed a large number of IF-THEN rules in the structure of the response tree itself. This technique might also be useful for other expert system applications.

The safety implications of nuclear reactor operations also mandate very severe requirements for accuracy in an expert system. A nuclear reactor accident could disrupt the lives of a large number of people. Therefore, the decisions reached by an expert system must receive very careful review by the human operator. For the foreseeable future, the human operator will always have the final responsibility for control room decisions.

In summary, REACTOR is an effort to apply expert system technology to the operation of nuclear power plants. Although the development of REACTOR is not yet complete, it seems that expert systems could prove significant for helping nuclear reactor operators cope with the complexities of accidents. Further development and trials of REACTOR will be necessary to fulfill this potential.

Figure 7. Display showing selected success path (LPIS schematic with the selected path, unavailable components, and available components distinguished).

Figure 8. Structure of REACTOR (keyboard, display program (Fortran), reactor simulation model, and response tree logic (LISP)).

REFERENCES

[1] Kemeny, J. G., et al., The Need for Change: The Legacy of TMI, Report of the President's Commission on the Accident at Three Mile Island, October 1979.
[2] Rogovin, M., et al., Three Mile Island: A Report to the Commissioners and to the Public, Nuclear Regulatory Commission Special Inquiry Group.
[3] Nelson, W. R., "Response Trees for Emergency Operator Action at the LOFT Facility," ANS/ENS Topical Meeting on Thermal Reactor Safety, Knoxville, TN, April 7-11, 1980.
[4] Nelson, W. R., Response Trees for Detection, Diagnosis, and Treatment of Emergency Conditions at the LOFT Facility, Master's Thesis, University of Washington, 1980.
[5] Nelson, W. R., "Decision Making in the Reactor Control Room," ANS/ENS Topical Meeting on Probabilistic Risk Assessment, Port Chester, N.Y., September 20-24, 1981.
[6] Duda, R. O., and Gaschnig, J. G., "Knowledge-Based Expert Systems Come of Age," Byte, September 1981.
[7] Webster, R., and Miner, L., "Expert Systems: Programming Problem-Solving," Technology, January/February 1982.
[8] Winston, P. H., and Horn, B. K. P., LISP, Reading, Massachusetts: Addison-Wesley Publishing Company, 1981.
[9] Corcoran, W. R., et al., "Nuclear Power-Plant Safety Functions," Nuclear Safety, March/April 1981.

NOTICE: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for any third party's use, or the results of such use, of any information, apparatus, product or process disclosed in this report, or represents that its use by such third party would not infringe privately owned rights. The views expressed in this paper are not necessarily those of the U.S. Nuclear Regulatory Commission. Work supported by the U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research under DOE Contract No. DE-AC07-76ID01570.
A CSA MODEL-BASED NUCLEAR POWER PLANT CONSULTANT

W.E. Underwood
School of Information and Computer Science
Georgia Institute of Technology
Atlanta, GA

ABSTRACT

An experimental computer-based nuclear power plant consultant is described. The inference procedures interpret observations of a particular plant situation in terms of a commonsense algorithm network model that characterizes the normal and abnormal events of a pressurized water reactor plant. This paper discusses the knowledge and control structures, and illustrates the operation of the system with situations from the accident at Three Mile Island.

INTRODUCTION

The concern for nuclear power plant safety on the part of the nuclear power industry, its federal regulators and the general public was increased by the accident at Three Mile Island. Among the recommendations of two committees [1,3] that studied that accident were that there be: new NRC requirements to ensure that qualified engineer supervisors with intimate knowledge of the plant be a part of the onsite supervisory management chain on every reactor operating shift, new mechanisms to incorporate operating experience into regulatory programs, and new programs for improved operator training.

A computer-based consultant could support the implementation of some of these recommendations. Namely, a knowledge base of nuclear power plant operation, procedures, and experience coupled with an automatic diagnostic capability might provide onsite expert advice. A committee of nuclear power plant experts could update the knowledge base on the basis of new experience. Coupled with a tutorial system the knowledge base might also be used for operator training.

Much of the knowledge required to diagnose nuclear power plant problems involves knowledge of the physical mechanisms of the nuclear power plant itself as well as its normal and emergency operator procedures. Commonsense Algorithms (CSAs) [2] have been used to model physical mechanisms, but there have been no expert systems specifically designed to use CSAs as a knowledge base. The purpose of the research described in this paper is to investigate the use of commonsense algorithms and expert systems technology in representing knowledge of nuclear power plants for use in problem diagnosis and intervention.

In the next section of this paper commonsense algorithms are reviewed and illustrated with a CSA model of a nuclear power plant subsystem. Then the inference rules and control strategy of a prototype computer-based consultant that uses the knowledge base is described. Next a sample consultation involving the Three Mile Island accident is sketched. Finally, the preliminary results are summarized and other factors in the ongoing development of the system are discussed.

COMMONSENSE ALGORITHM NETWORK MODELS

The Commonsense Algorithm (CSA) representation has been used for describing physical mechanisms [2], and as the basis for problem solving and language comprehension systems. The CSA representation for physical mechanisms consists of 4 event types and 9 relations. The events are actions (A), tendencies (T), states (S), and statechanges (SC). The 9 relations (or links) are one-shot causality (OS-CAUSE), continuous causality (C-CAUSE), repetitive causality (R-CAUSE), state coupling (S-COUPLE), equivalence (EQUIV), antagonism (ANTAG), enablement (C-ENABLE), threshold (THRESH), and rate confluence (RATE-CONFL). The first 5 types of relation can be "gated" by conditions that must hold for the causal relationship to continue to hold. Some of the events and relations are illustrated in Fig. 1. In the explanation that follows, the verbal description is cross-referenced with the events (En). The pilot-operated relief valve (PORV) could be explained in English as follows:
The flKSt 5 types of relation can be "gated" by con- ditions that must hold for. the causal relationship to continue to hold. Some of tile events and relations t3Ktt illustrated in kig. 1. In the explana- tion tllat follows the verbal description is cross-rtferenced with the event (En). TllL pilot-operated relief valve (PCHb) could be explained in English as follows: 302 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. If there is a positive statechange in vapor pressure (El) that tf-lresholds at 2200 psi (E2), then the pressure switch closes (E3). As long as the solenoid wires are intact (E4), there will be EMF in the solenoid (E5) that bill enable the tendency magnetism (E7) to continuously cause the solenoid plunger to be out (E9) provided it is not stuck in a closed posi- tion (E8). because of a mechanical link to the PCiRV, the PORV will be open (Eli) provided it is not stuck in a closed posi- tion (ElO). This is equivalent to a negative statechange in vapor pressure (E16) and coolant in the reactor coolant system (El71 provided the block valve is open (E12). When the negative statechange in vapor pressure thresholds at 2100 psi (E18), the pressure switch will open (E19) that there is no ZElenoid (E20) longer EhF' in the , and the solenoid pl uriy e r will retract (E22), closing the POkV (E23). Note that both normal and abnoLma1 states and processes are represented in the network. No claim is made that the CSA representation for this exariiple is complete nor that all the events are at the same level of description. The CSA network for the PWH po!ieL plant currently consists of a model of the primary coolant system including the coolant pumps , reactor, pressurizer (a part of which is the PORV previously described), steam generator, and emergency core cooling system. Events which are symptomatic of system problems and which cause alarms and automatic control actions are also represented in the model. Cperator actions in rest-onse to al;llms are represented as sequential commonsense algorithms. Diagnostic rules are also represented in the CSA network. For example, the tliree states "containmrnt pressure greater than normal", "radiation level greater than normal", and "recirculation sump level greater than normal" are all continuously caused by the accident state "loss of reactor coolants which hould be caused by any rlumber of abnormal states. CONTROL STRATEGY A consultation usually begins with a human operator requesting diagnosis of the cause of some abnormal event such as a negative statechange in pressurizer pres- sure, a positive statechange in containment pressure, or a containment temF,erature greater than normal. Tile consultatiorl may also begin r"ith a request for diagnosis of the cause of some automatic action of the control system i n response to the previously mentioned (NAlrlE P(JRV-SYS?E~) (EVENTS (1 SC (PkLSSURE VAPGR X 22OU PUS)) (2 s (PRESSURE VAPOR 2200 POS)) (3 s (PhLSSURE SWITCH CLOSED)) (4 S (SOLENOID WIRE INTACT)) (5 S (EhF IN SCLENOID)) (6 S (INDICATOR LIGHT ON)) (7 T (~IAGNE?IS~)) (8 S (SOLENOIC PLUNGER NC? STUCK)) (9 S (SGLENOID PLUNGE,R OU'i)) (10 S (POHV NO? 
STUCK)) (11 s (PCIHV OPEN)) (12 S (BLOCK VAL\/E OFEN)) (13 S (BLCCK VALVE CLOSED)) (14 A (OPEN bLGCK VALVE)) (15 A (CLOSE BLOCK VALVE)) (16 SC (FRESSUHE VAPCH 2200 210C NEG)) (17 SC (CCOLANI 1N RCS NEC)) (18 S (PRESSURE VAPCR 2100 POS)) (19 s (PRCSLURC ShlTCh DFEN)) (20 S (NCj f&F IN SOLENOID)) (21 S (INClCAQOL< LIGhl OFF)) (22 S (SOLENOID PLUNGER IN)) (23 s (PUhV CLOSED))) (LINKS (1HRESh (1 2)) (S-CCUPLE (2 3)) (S-COUPLE (3 5) (4)) (5-CCUPLE (5 6)) (C-ENAbLE (5 7)) (C-CAUSL (7 9) (8)) (S-EQUIV (9 11) (10)) (S-EQUIV (11 16) (12)) (OS-CAUSE (14 12)) (GS-CAU5L (15 13)) (AN'IAG (12 13)) (S-E(lUIV (16 17)) (TEhESH (16 18)) (S-COIJPLL (18 19)) (ANTAG (3 19)) (S-CCUPLE (1s 20)) (S-CCUPLE (20 21)) (ANlAG (6 21)) (EcUlV (20 22) (8)) (EGbIV (22 23) (10)) (ANTAG (11 23))) (PURPOSE, (2 1E)) (NC!R(hAL (19 4 21 8 10 23 13)) (ThIGGEh (2))) Figure 1. CSA hepresentation of a Pilot Operated Relief Valve. abnormal events. SUCll actions include "reactor trikll, "safety injection", and containment isolation". ?hc control strategy first indexes into the CSA net to locate these events. There are only four types ot accidents to be diagnosed: loss of reactor coolant, loss of secondary coolant, steam generator tube rupture, and spurious actuation of safety injection. Hence a forbtiard cLain- ing control strategy is used. When a causal event is infered that has immediate effects that are observable but not verified, the control strategy asks the operator to verify these in order to further confirm the inference. An operator is sometimes faced with the task of interpreting observations that are seemingly contradictory. lhe CSA network model can be used to interpret the meaning of these observations and thus resolve many of the apparent conflicts. If conflicting observations are received, the conflict is noted, and the diagnosis is undetermined until additional obser- vations on the basis of the causal model can be obtained and additional inferences can be drawn from the model to resolve the conflict. SAMPLE CONSULTATION A sample consultation between an operator and tile consultant system is shown in Fig. 2. The operator asks for advice on abnormally low vapor pressure in tile coolant system. The consultant diagnoses the problem as loss of coolant (LOCA) , and predicts that the Emergency Core Cooling System (ECCS) will come on. When the operator reports that the ECCS is onI the consultant predicts that it will cutoff. When the operator reports rising water level in the pressurizer, the consultant suspects that the PORV is open. When the operator asks whether the ECCS can be turned off, the consultant says not, and explains tllat since coolant is being lost from the system, if the ECCS is turned off, the reactor core will eventually become uncovered. In the TM1 accident the operators could not decide that they were encounter- ing an LOCA. Instead they were confronted with what they thought was contradictory information. The pressure of the coolant system was low, indicating a loss of coolant, but the water in the pressurizer was rising, indicating (they thought) a surplus of coolant. Using the CSA model the system is able to generate pathways to the observed states and gives alternative explanations to the asserted and denied states. Thus the escape of steam from the top of the pressure vessel is determined to be an explanation of the rising water level in the pressurizer, so that rising water level in one part of the system does not contradict loss of coolant in the overall system. ]CSANb? 
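As an illustration of how gated causal links can drive this kind of forward chaining, here is a minimal Common Lisp sketch over links in the Figure 1 format. It is not the consultant's actual inference procedure: it simply propagates an asserted event across any link whose cause and gating conditions all hold, which is the core step the control strategy builds on.

  (defparameter *links*
    ;; (link-type (cause effect) [gates]), taken from Figure 1
    '((thresh   (1 2))
      (s-couple (2 3))
      (s-couple (3 5) (4))
      (c-enable (5 7))
      (c-cause  (7 9) (8))
      (s-equiv  (9 11) (10))))

  (defun propagate (asserted)
    "Close ASSERTED (a list of event numbers) under the gated links."
    (let ((events (copy-list asserted)) (changed t))
      (loop while changed do
        (setf changed nil)
        (dolist (link *links*)
          (destructuring-bind (type (cause effect) &optional gates) link
            (declare (ignore type))
            (when (and (member cause events)
                       (subsetp gates events)
                       (not (member effect events)))
              (push effect events)
              (setf changed t)))))
      events))

  ;; With the normal conditions 4, 8, and 10 asserted, high pressure (2)
  ;; propagates to an open PORV (11):
  ;; (propagate '(2 4 8 10))  ; => a list containing 3, 5, 7, 9, and 11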
SAMPLE CONSULTATION

A sample consultation between an operator and the consultant system is shown in Fig. 2. The operator asks for advice on abnormally low vapor pressure in the coolant system. The consultant diagnoses the problem as loss of coolant (LOCA), and predicts that the Emergency Core Cooling System (ECCS) will come on. When the operator reports that the ECCS is on, the consultant predicts that it will cut off. When the operator reports rising water level in the pressurizer, the consultant suspects that the PORV is open. When the operator asks whether the ECCS can be turned off, the consultant says no, and explains that since coolant is being lost from the system, if the ECCS is turned off, the reactor core will eventually become uncovered.

In the TMI accident the operators could not decide that they were encountering a LOCA. Instead they were confronted with what they thought was contradictory information. The pressure of the coolant system was low, indicating a loss of coolant, but the water in the pressurizer was rising, indicating (they thought) a surplus of coolant. Using the CSA model the system is able to generate pathways to the observed states and gives alternative explanations to the asserted and denied states. Thus the escape of steam from the top of the pressure vessel is determined to be an explanation of the rising water level in the pressurizer, so that rising water level in one part of the system does not contradict loss of coolant in the overall system.

]CSANET
TYPE EVENT FOR WHICH YOU WANT ADVICE.
* (SC (PRESSURE VAPOR * * NEG))
(S (PRESSURE VAPOR ?))
* 1800
(S (TEMP CONSTANT))?
* YES
(S (REACTOR TRIP))?
* YES
IMMEDIATE OPERATOR ACTIONS COMPLETED?
* YES
DIAGNOSIS: LOSS OF COOLANT ACCIDENT
PREDICT: (S-COUPLE (S (PRESSURE VAPOR 1600)) (S (ECCS ON)))
SUGGEST: IMPLEMENT EMERGENCY PROCEDURE FOR LOSS OF COOLANT ACCIDENT
* (S (ECCS ON))
(S (PRESSURE VAPOR ?))
* 1700
PREDICT: (THRESH (S (PRESSURE VAPOR 2000)) (S (ECCS OFF)))
* (SC (LEVEL COOLANT-IN PRESSURIZER POS))
(S (PORV OPEN))?
* NO
* (A (TURN OFF ECCS))?
NO
* WHY?
(S-COUPLE ((SC (AMT COOLANT 2000 1600 NEG)) (S (ECCS OFF)))
          (S (REACTOR-CORE UNCOVERED)))

Figure 2. Sample Consultation.

RECOMMENDED INTERVENTIONS

Knowledge of the emergency procedures and possible interventions by the operators can also be represented by CSA networks. In some cases the CSA net representation of emergency procedures can be associated with the diagnostic category. In other cases the system may search the CSA model for the location of the specific source of the problem to determine the corrective intervention for the problem. Intervention may introduce factors that were not considered in the original diagnosis, so it is appropriate to infer the effects of remedial actions prior to recommending them. History of other problems or progression of the diagnosed problem may be factors in selecting which alternative procedures to recommend. It may also be necessary to reason as to why the procedures may not be correcting the problem.

RESULTS AND DISCUSSION

Expert system technology and commonsense algorithms provide a useful framework for organizing knowledge of nuclear power plants. The current knowledge base is a subset of the knowledge required for useful advice on actual plant operation. The current system consists of a 350-event CSA model of a PWR coolant system constructed by a nuclear engineering expert, a CSA network simulator for designing and testing the models, and a diagnostic program that uses a forward chaining control strategy. Of the various expert systems this prototype most closely resembles CASNET or EXPERT [4]. It differs from those systems primarily in the use of CSA nets, the integration of the diagnostic rules into the CSA nets, and in its simulation capability.

Data on frequency of component failure or abnormal events can be included in the model to aid in determining the certainty of competing hypotheses. Since some of the "facts" that the operators report are inferences or beliefs, they may be asked to associate a certainty factor with their assertions.

The models of plant operation, event diagnosis and plant intervention will be experimentally validated using the methodology used to test MYCIN [6]. This involves comparing the results of the computer-based consultant with results obtained from nuclear power plant operators and nuclear engineers with varying levels of experience presented with the same case studies. Independent human experts in nuclear plant operation would then evaluate the results and rank the computer-based consultant at the appropriate level of expertise.

REFERENCES

[1] Kemeny, J. G., et al., The Report of the President's Commission on the Accident at Three Mile Island, Oct. 1979.
[2] Rieger, C. and M. Grinberg, "The Causal Representation and Simulation of Physical Mechanisms," Technical Report TR-495, Department of Computer Science, The University of Maryland, College Park, MD, 1976.
[3] Rogovin, M. and G. T. Frampton, Jr., Three Mile Island: A Report to the Commissioners and to the Public, U.S. Nuclear Regulatory Commission, Washington, D.C., 1980.
[4] Weiss, S. M. and C. A. Kulikowski, "EXPERT: A System for Developing Consultation Models," in Proc. IJCAI-79, Tokyo, Japan, August 1979, 942-947.
[5] Wells, A. H. and W. E. Underwood, "Knowledge Structures for a Nuclear Power Plant Consultant," Annual Meeting of the American Nuclear Society, Los Angeles, June 1982.
[6] Yu, V. L., et al., "Evaluating the Performance of a Computer-Based Consultant," Computer Programs in Biomedicine, 9:1 (1979), 95-102.
Information Acquisition in Diagnosis(1)

Ramesh S. Patil, Peter Szolovits
Massachusetts Institute of Technology, Laboratory for Computer Science
545 Technology Square, Cambridge, Massachusetts 02139

William B. Schwartz, M.D.
Dept. of Medicine, Tufts University School of Medicine, and the Medical Service, New England Medical Center, Boston, Massachusetts

Abstract

We argue that the question selection processes used in the existing AI in Medicine programs are inadequate. We trace these inadequacies to their use of purely surface-level models of disease and to the lack of planning in sequencing their inquiries. We then describe the information gathering component of ABEL (a program for Acid-Base and Electrolyte disturbances) with emphasis on its use of a causal model of a patient's illness in guiding its diagnostic inquiries.

1. Background

Diagnostic programs built using the AI in Medicine (AIM) methodology typically consist of two parts: one to form hypotheses that explain the data known to the program, and one to acquire new information from the program's users to clarify difficulties remaining with the hypotheses under consideration. In this paper we suggest that the questioning behavior of most existing diagnostic programs is inadequate because it is not very responsive to the totality of the program's hypotheses. Both rule-based systems such as Mycin [Shortliffe76] and frame-based systems such as Internist-I [Pople77] and PIP [Pauker76] suffer from this defect, though for diverse reasons.

Mycin [Shortliffe76] is a rule-based expert program for the diagnosis and therapy of infectious diseases. It uses a backward chaining control structure to sequence through its rule base in search of information relevant to the diagnosis of possible infections. Mycin always pursues the antecedent clauses of a rule in their order of appearance in the rule. Thus during any execution of the program the antecedent clauses of every rule are expanded in exactly the same order, resulting in a pre-determined order in which the questions may be asked.(2) This rigidity in Mycin results from its use of rules to encode both its medical knowledge and the control component of expert reasoning. The resulting limitations of the Mycin system were recognized by Davis [Davis76], who proposed meta-rules as a means of separating these two aspects of expert behavior. Meta-rules are used in his system, Teiresias [Davis76], to encode strategic knowledge for the control of the backward chaining rule interpreter by dynamically re-ordering and selecting rules to be pursued. It is important to note that the meta-rules do not alter the program's domain knowledge; they provide additional knowledge that allows it to make more efficient use of this knowledge.

(1) This research was supported (in part) by the National Institutes of Health Grant No. 1 P01 LM 03374-03 from the National Library of Medicine. Dr. Schwartz's research was also supported by the Robert Wood Johnson Foundation, Princeton, New Jersey.
(2) Mycin's reasoning scheme chooses not to ask a question only if the rule whose antecedent part suggests it is already doomed to fail by answers to previous questions, or if the question has been previously answered because it appeared in the condition of another rule.
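To make the meta-rule idea concrete, here is a minimal Common Lisp sketch, not Teiresias itself: a meta-rule pairs an applicability test with a transformation of the candidate rule list, and the interpreter applies every applicable meta-rule before pursuing the object-level rules. The context key and the :prefers tag are invented for illustration.

  (defstruct meta-rule test transform)

  (defparameter *meta-rules*
    (list (make-meta-rule
           ;; hypothetical strategy: in a compromised host, try rules
           ;; about unusual organisms first
           :test (lambda (context) (member :compromised-host context))
           :transform (lambda (rules)
                        (stable-sort (copy-list rules) #'>
                                     :key (lambda (r)
                                            (if (member :unusual-organisms
                                                        (getf r :prefers))
                                                1 0)))))))

  (defun order-rules (rules context)
    "Re-order (or prune) RULES with every meta-rule applicable in CONTEXT."
    (dolist (m *meta-rules* rules)
      (when (funcall (meta-rule-test m) context)
        (setf rules (funcall (meta-rule-transform m) rules)))))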
Internist-I, a diagnostic program being developed for all of internal medicine [Pople77], and the Present Illness Program (PIP), a program for the initial formulation of a diagnosis for patients with diseases of the kidneys [Pauker76], both focus on a problem to investigate next by considering all their active hypotheses. Therefore, control over which disease or disease set is next to be explored does not depend on the manner or order in which these are expressed in the program. However, once the controlling hypothesis set is determined, the information gathering strategies are limited to selecting one pre-packaged group of questions at a time. E.g., each time the program chooses to try to confirm some particular hypothesis, it will run through the same list of associated questions, in the order in which they appear in the program's static knowledge base, without regard to what else the program knows. After a group of questions is asked, the program's hypothesis formation module takes control again, and whether this line of questioning is resumed or abandoned depends on the process of hypothesis evaluation and whether the same hypothesis set again rises to the top of the program's interest. Often too little control is thus exercised by the information acquisition part of the program, and overall diagnostic inefficiencies and incoherent question sequences result.

To study the value and use of multi-level causal descriptions in diagnostic and therapeutic reasoning, we have been developing a new medical expert consultation program for the domain of acid-base and electrolyte disorders, called ABEL. The information-gathering component of ABEL uses the program's causal model of what it believes might be wrong with the patient to help select and order possible questions to be asked (thus capturing the advantages of meta-rules used for this purpose in Mycin), and unlike Internist-I or PIP it constructs a rather detailed information gathering plan tailored to the specific set of hypotheses it must confirm or differentiate among. This approach gives ABEL improved control over the sequencing of questions that are eventually asked of the user.

We have noted elsewhere [Patil81] that most of the medical knowledge contained in the first generation AIM programs can be characterized as phenomenological: it describes the associations among phenomena without the causal mechanisms underlying the observed associations. Therefore, unlike expert clinicians, these programs are unable to recognize and use causal and temporal relationships among diseases and to reason about disease mechanisms at various levels of detail. They are unable to evaluate the interactions, commonalities and differences among diseases, except perhaps in terms of some simple measure of relative likelihood. ABEL is intended, by contrast, to provide multiple levels of description of its hypotheses, allowing it to use both the phenomenological associations of a shallow level and the detailed causal mechanisms of deeper levels in formulating hypotheses and in planning the gathering of new information.

In a previous paper [Patil81a] we have presented ABEL's mechanism for describing a patient's illness. Called the patient-specific model, this description includes data about the patient as well as the program's hypothetical interpretations of these data in a multi-level causal network. In this paper we focus on the diagnostic information-gathering process of the ABEL program, which complements the patient-specific model.
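The flavor of such a multi-level description can be suggested with a small Common Lisp sketch. This is only an illustration of the general idea of pairing a shallow causal link with a deeper elaboration, not ABEL's actual data structures, and the intermediate state is simplified from the example in Fig. 1 below.

  (defstruct causal-link cause effect elaboration)

  ;; Shallow, phenomenological level: diarrhea causes metabolic acidosis.
  ;; Deeper level: the same link elaborated through an intermediate state.
  (defparameter *diarrhea->acidosis*
    (make-causal-link
     :cause 'diarrhea
     :effect 'metabolic-acidosis
     :elaboration
     (list (make-causal-link :cause 'diarrhea
                             :effect 'lower-gi-loss)
           (make-causal-link :cause 'lower-gi-loss
                             :effect 'metabolic-acidosis))))

  (defun deep-chain (link)
    "Return the deepest available chain of links under LINK."
    (if (causal-link-elaboration link)
        (mapcan #'deep-chain (causal-link-elaboration link))
        (list link)))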
2. The information acquisition problem

The overall diagnostic activity is concerned with developing a clear understanding of the disease process(es) responsible for the patient's illness, for use in prognosis and therapy. It involves interpreting available information about the patient to form diagnostic hypotheses which might explain the patient's illness, and acquiring new information that will help discriminate among these hypotheses. The information acquisition activity must take into account factors such as the cost, efficiency and urgency of acquiring information. In addition, as it interacts with the user, it must also take into account factors such as topical coherence and the possibility of errors.

Fig. 1. In one explanation of the example case, diarrhea causes lower gastro-intestinal losses, which result in metabolic acidosis, volume depletion, and their consequences (low urinary sodium excretion, low blood pressure, poor tissue turgor, slightly elevated serum creatinine, slight serum hypo-osmolarity).

Previous programs, e.g., Internist-I [Pople77], have employed strategies such as discriminate, pursue and rule-out to help select a question or group of questions to ask next. ABEL uses such strategies instead to decompose the overall diagnostic problem of resolving uncertainties in the hypotheses into sub-goals, each of which is more specific and amenable to further decomposition. Thus, rather than generating questions directly from the top-level diagnostic problem, ABEL generates a tree of sub-problems the leaves of which can be solved by asking one or a small group of questions. Because the whole set of these questions is methodically selected before any question is actually asked, interdependencies among the questions can be taken into consideration. For example, ABEL currently organizes all the planned questions by the organ system, etiology, or time period on which they concentrate, so that groups of questions are asked in a coherent order. The number of places in the sub-goal tree where the answer to a question is thought to be useful is also used to efficiently order the sequence of questions.

Erroneous information is bound to be presented to the program during a diagnostic session. Therefore, an ability to identify questionable information and to challenge and correct it quickly is an important ingredient of clinical expertise. When presented with a finding which, if accepted, may require extensive reformulation of the currently held diagnoses, it may be unwise for the program to act on such information unless it can be substantially corroborated and its validity as a diagnostic sign checked out. For example, upon unexpectedly (and erroneously) finding "a substantial weight increase" in a patient over a short period of time, the program should check if the two weights were taken on the same scale. It is unreasonable, however, to ask the same question every time a weight change is reported for a patient. In ABEL, we associate with each information gathering goal expectations about its possible outcomes. After each question is asked, if the answer is consistent with expectations, questioning continues with the planned sequence. However, if the answer contradicts the expectations, this information is considered questionable and an excuse-finding mechanism is activated. This allows ABEL to pursue questionable findings further before accepting them.
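A minimal Common Lisp sketch of this bookkeeping, assuming a simplified goal record (the field names and the weight example's follow-up check are illustrative, not ABEL's code):

  (defstruct goal name expectations excuse-checks)

  (defparameter *weight-goal*
    (make-goal :name 'explore-weight-change
               :expectations '(stable slight-decrease)
               ;; hypothetical follow-up, pursued only when the answer is
               ;; outside every expectation
               :excuse-checks '(were-both-weights-on-the-same-scale?)))

  (defun assimilate (goal answer)
    "Accept an expected ANSWER; otherwise activate the excuse mechanism."
    (if (member answer (goal-expectations goal))
        (list :consistent answer)
        (list :questionable answer :pursue (goal-excuse-checks goal))))

  ;; (assimilate *weight-goal* 'substantial-increase)
  ;; => (:QUESTIONABLE SUBSTANTIAL-INCREASE
  ;;     :PURSUE (WERE-BOTH-WEIGHTS-ON-THE-SAME-SCALE?))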
In the remainder of the paper we will describe the workings of the information gathering activity of ABEL with the help of an example.

3. An Example

Let us consider a patient who has been ill for 3 to 4 days and is known to have moderately severe metabolic acidosis and slight hyponatremia (serum Na of 128 meq/l). Let us also assume that no additional history is available. Two possible formulations of the patient's problem are shown in Figs. 1 and 2. One hypothesis states that the underlying disorder is diarrhea, the other, that it is acute renal failure.

Fig. 2. An alternative explanation to that of Fig. 1 holds acute renal failure responsible for acid retention, which causes metabolic acidosis and its consequences (abnormal urine sediment, high urinary sodium excretion, fluid retention, increased serum creatinine, edema, hyponatremia).

The program has set as its top-level goal, as shown in Fig. 3, the desire to discriminate between these two possible interpretations.

Goal 1: Differentiate diarrhea, acute renal failure
  Context: Cause-of metabolic acidosis
  Expectations:
    Possible: diarrhea; Severity: moderate
    Possible: acute renal failure; Severity: moderate
  Duration: a few days
  Subgoals: (AND 2 3 8 9 10)

Fig. 3. Top-level goal structure for discriminating between diarrhea and acute renal failure.

To accomplish this objective the program compares the two interpretations, identifying the differences between the states predicted by the two interpretations, and formulates sub-goals to pursue each difference. For example, the program identifies urinary sodium concentration as a useful differentiator between diarrhea and acute renal failure, because diarrhea predicts that the urinary sodium concentration will be low whereas renal failure predicts a relatively high urinary sodium concentration. Similarly, the program can differentiate between diarrhea and acute renal failure by determining the state of hydration of the patient; this goal can be achieved by confirming either volume depletion or edema. Diarrhea predicts the loss of fluid and therefore volume depletion. Volume depletion, however, cannot be directly observed; therefore, the program further decomposes this goal into sub-goals for confirming poor tissue turgor, low blood pressure and a slightly elevated serum creatinine concentration. Acute renal failure, on the other hand, predicts the accumulation of body fluids if normal intake of fluid has continued during the period of oliguria. If a sufficient accumulation occurs, it will manifest itself as edema. A graphic representation of the complete goal structure is shown in Fig. 4, and the goals are listed below it.

The program has now completed the generation of the sub-goal tree for differentiating between diarrhea and acute renal failure. It can group and order the questions by efficiency and stylistic measures. For example, it groups questions about the results of urine analysis (namely urinary sodium concentration and urine sediment) to ask first, general physical questions relating to the state of hydration (i.e., edema, tissue turgor and blood pressure) next, and serum creatinine and serum K concentrations last.
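The comparison step can be suggested with a short Common Lisp sketch, assuming each hypothesis has been reduced to an alist of predicted findings (the values below are taken from the example; the function is an illustration, not ABEL's planner):

  (defparameter *predictions*
    '((diarrhea            (urinary-sodium low)
                           (state-of-hydration volume-depletion)
                           (serum-k low))
      (acute-renal-failure (urinary-sodium high)
                           (state-of-hydration fluid-retention)
                           (serum-k increased)
                           (urine-sediment abnormal))))

  (defun differentiate (h1 h2)
    "Collect explore-goals for every finding the two hypotheses predict
  differently; a finding predicted by only one side becomes a confirm-goal."
    (let ((p1 (rest (assoc h1 *predictions*)))
          (p2 (rest (assoc h2 *predictions*))))
      (loop for (finding v1) in p1
            for v2 = (second (assoc finding p2))
            collect (list (if v2 :explore :confirm) finding
                          :expectations (list h1 v1 h2 v2)))))

  ;; (differentiate 'diarrhea 'acute-renal-failure) yields explore-goals
  ;; for urinary sodium, state of hydration, and serum K, mirroring
  ;; Goals 2, 3, and 10 of Fig. 4.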
Goal 2: explore urinary sodium concentration
  Context: differentiate diarrhea, acute renal failure
  Expectations:
    Possible: low (less than 10 meq/l); Cause: diarrhea
    Possible: high (greater than 40 meq/l); Cause: acute renal failure

Goal 3: explore state of hydration
  Context: differentiate diarrhea, acute renal failure
  Expectations:
    Possible: volume depletion; Cause: diarrhea; Severity: moderate
    Possible: fluid retention; Cause: acute renal failure;
      Default: continued normal fluid intake; Severity: mild to moderate
  Subgoals: (XOR 4 5)

Goal 4: confirm fluid retention
  Context: caused by acute renal failure
  Expectations:
    Possible: edema; Severity: mild to moderate
    Possible: no edema

Goal 5: confirm volume depletion
  Context: caused by diarrhea
  Expectations:
    Possible: present; Severity: mild to moderate
    Possible: absent
  Subgoals: (AND 6 7)

Goal 6: confirm poor tissue turgor
  Context: explore state of hydration
  Expectations:
    Possible: mild; Cause: volume depletion
    Possible: absent; Cause: fluid retention

Goal 7: explore blood pressure
  Context: explore state of hydration
  Expectations:
    Possible: slightly low; Cause: volume depletion
    Possible: normal; Cause: fluid retention

Goal 8: confirm hemoglobin and tubular cell casts in urine
  Context: differentiate diarrhea, acute renal failure
  Expectations:
    Possible: present; Cause: acute renal failure
    Possible: absent; Cause: diarrhea

Goal 9: explore serum creatinine
  Context: differentiate diarrhea, acute renal failure
  Expectations:
    Possible: slightly to moderately elevated; Cause: acute renal failure
    Possible: slightly elevated; Cause: diarrhea

Goal 10: explore serum K
  Context: differentiate diarrhea, acute renal failure
  Expectations:
    Possible: increased; Cause: acute renal failure
    Possible: low; Cause: diarrhea

Fig. 4. The complete goal structure. AND indicates goals that must be simultaneously achieved and XOR indicates goals of which only one can be achieved. The solid arcs in the goal structure represent the path actually being taken by the information-gatherer.

The following is a summary of the information gathered by the program in pursuing this goal structure:

  Urinary sodium concentration: 50 meq/l
  Urine sediment: negative
  Edema: absent
  Tissue turgor: slightly reduced
  Blood pressure: normal
  Serum creatinine concentration: 2.5 mg per cent
  Serum K concentration: 3.5 meq/l

After successfully achieving each of the sub-goals, the program reconsiders the top-level goal of differentiating between diarrhea and acute renal failure. The program realizes that the overall set of findings is not consistent. In particular, the finding of high urinary sodium concentration suggests acute renal failure, whereas the low serum K concentration is inconsistent with acute renal failure. On the other hand, low serum K concentration is consistent with diarrhea, whereas high urinary sodium concentration is not. This conflict activates the program's excuse mechanism (see below). To resolve the contradiction, the program sets up two goals so that one of the two diagnoses can be confirmed. The two goals are shown in Fig. 5. Upon pursuing these goals the program finds that the patient has been vomiting,
which explains the slightly low serum K concentration and a slight volume depletion. The high urinary sodium excretion argues strongly in favor of acute renal failure.

Goal 11: Find excuse for high urinary sodium concentration
  Context: diarrhea
  Expectations:
    Possible: diuretic use
    Possible: Addison's disease

Goal 12: Find excuse for low serum K
  Context: acute renal failure
  Expectations:
    Possible: diuretic use
    Possible: vomiting

Fig. 5. Additional goals created to resolve an apparent contradiction in input data.

After completing a cycle of information gathering the program enters this information into the patient-specific model, revises its hypotheses for the patient's illness and continues the diagnosis.

4. Summary and Conclusion

In this paper we have briefly sketched the information acquisition method used in the ABEL program. We have proposed a novel way of viewing the traditional diagnostic strategies as decomposition operators. We have then used these strategies to decompose an overall diagnostic goal into a tree of sub-goals the leaves of which can be solved directly by asking questions. In addition we have proposed attaching to each diagnostic goal the expectations of the program about its possible outcomes. Because all the questions useful to the overall diagnostic goal are methodically generated before any question is asked, the program can order these questions for efficiency and coherence. Because the possible outcomes of pursuing each goal are determined before questions relating to it are asked, the program can easily determine the success or failure of its goals and can identify questionable information that contradicts all possible expectations. This allows the program to make efficient use of its case-specific knowledge and exert better control in its information acquisition activity.

5. References

[Davis76] Davis, R., Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases, Ph.D. thesis, Stanford Artificial Intelligence Laboratory, Technical Memo 283 (1976).
[Patil81] Patil, R. S., Szolovits, P., and Schwartz, W. B., Causal Understanding of Patient Illness in Medical Diagnosis, Proceedings of the Seventh International Joint Conference on Artificial Intelligence, available from American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, CA 94025 (1981).
[Patil81a] Patil, R. S., Causal Representation of Patient Illness for Electrolyte and Acid-Base Diagnosis, Ph.D. Thesis, Laboratory for Computer Science, Massachusetts Institute of Technology, Technical Report MIT/LCS/TR-267 (1981).
[Pauker76] Pauker, S. G., Gorry, G. A., Kassirer, J. P., and Schwartz, W. B., Toward the Simulation of Clinical Cognition: Taking a Present Illness by Computer, The American Journal of Medicine 60:981-995 (1976).
[Pople77] Pople, H. E., Jr., The Formation of Composite Hypotheses in Diagnostic Problem Solving: an Exercise in Synthetic Reasoning, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, available from the Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213 (1977).
[Shortliffe76] Shortliffe, E. H., Computer Based Medical Consultations: MYCIN, Elsevier North Holland Inc. (1976).
DEEP VERSUS COMPILED KNOWLEDGE APPROACHES TO DIAGNOSTIC PROBLEM-SOLVING

B. Chandrasekaran and Sanjay Mittal
Dept. of Computer and Information Science
The Ohio State University
Columbus, Ohio 43210

ABSTRACT

In this paper we argue that given a body of underlying knowledge that is relevant to diagnostic reasoning in a medical domain, it is possible to create a diagnostic structure which has all the relevant aspects of the underlying knowledge "compiled" into it in such a way that all the diagnostic problems in its scope can be solved efficiently, without, generally speaking, any need to access the underlying structures. We indicate what such a diagnostic structure might look like by reference to our medical diagnostic system MDX. We also analyze the role of these knowledge structures in providing explanations of diagnostic reasoning.

I INTRODUCTION

Recently Hart [1] and Michie [2] have written about the "depth" at which knowledge is represented and used in problem-solving by expert systems. Hart makes a distinction between "deep" and "surface" systems, while Michie characterizes a similar distinction by reference to "high road" vs. "low road" approaches. The underlying idea is that surface systems are at best a data base of pattern-decision pairs, with perhaps a simple control structure to navigate through the data base. There is less agreement on exactly what characterizes deep systems, but it is suggested that deep systems will solve problems of significantly greater complexity than surface systems can.

This distinction appears to capture a fairly widespread feeling about the inadequacy of a variety of first generation expert systems. In the area of medical diagnosis -- which will be the exclusive concern of this paper, even though the spirit of what we say may be applicable to other tasks as well -- the straightforward approach of building a data base of patterns relating data and diagnostic states is not feasible given the large number of patterns that would be needed in any realistic medical domain. There is also the pragmatic problem of coming up with a complete set of such patterns in the first place. The next best approach, namely, devising some problem-solving mechanism which operates on a data base of partial patterns (i.e., patterns relating only a small set of data to diagnostic states; also called situation-action rules) has been tried with moderate success [3, 4]. Hart [1] and Patil [5] have raised many valid concerns about these "compiled-knowledge" approaches.

The major intuition behind the feeling that expert systems should have deep models is the observation that often even human experts resort to "first principles" when confronted with an especially knotty problem. Also there is the empirical observation that a human expert who cannot explain the basis of his reasoning by appropriate reference to the deeper principles of his field will have credibility problems, especially in life and death areas such as medicine. Added to this is the often unspoken assumption that the speed and efficiency with which an expert solves problems can be accounted for by hypothesizing that the physician uses a data base of commonly occurring patterns, similar to that described earlier, for quick problem solving in most cases. In this view this data base is no longer adequate in hard cases, and invocation of deeper structures is called for.

The above intuitions have resulted in calls for the representation and manipulation of deeper knowledge structures in expert system design. There is, however, no general agreement on the form and content of these deeper structures -- Hart suggests that they should model causality, while Michie, following Rouse [6], proposes that they should represent knowledge of the form "situation x action -> situation". The work of Patil [5] and Pople [7] is based on the idea that the appropriate form for the representation of this kind of knowledge is a causal net. There is an associated issue of the nature of problem solving mechanisms which can operate on these knowledge structures and produce solutions to problems. The investigations of Patil and Pople referenced earlier attempt to answer this question.

The thesis of this paper is as follows. Between the extremes of a data base of patterns on one hand and representations of deep knowledge (in whatever form) on the other, there exists a knowledge and problem solving structure, which (1) has all the relevant deep knowledge "compiled" into it in such a way that it can handle all the diagnostic problems that the deep knowledge is supposed to handle if it is explicitly represented and reasoned with; (2) will solve the diagnostic problems more efficiently; but (3) cannot solve other types of problems -- i.e., problems which are not diagnostic in nature -- that the deep knowledge structure potentially could handle.
There is, however, no general agreement on the form and content of these deeper structures -- Hart suggests that they should model causality, while Michie, following Rouse [6], proposes that they should represent knowledge of the form "situation x action -> situation". The work of Patil [5] and Pople [7] is based on the idea that the appropriate form for the representation of this kind of knowledge is a causal net. There is an associated issue of the nature of problem-solving mechanisms which can operate on these knowledge structures and produce solutions to problems. The investigations of Patil and Pople referenced earlier attempt to answer this question.

The thesis of this paper is as follows. Between the extremes of a data base of patterns on one hand and representations of deep knowledge (in whatever form) on the other, there exists a knowledge and problem-solving structure which (1) has all the relevant deep knowledge "compiled" into it in such a way that it can handle all the diagnostic problems that the deep knowledge is supposed to handle if it is explicitly represented and reasoned with; (2) will solve the diagnostic problems more efficiently; but (3) cannot solve other types of problems -- i.e., problems which are not diagnostic in nature -- that the deep knowledge structure potentially could handle.

For the past several years, we have been developing a medical diagnosis system called MDX [8] [9]. This system embodies the above thesis. The knowledge structure of MDX has compiled in it all the relevant aspects of deep knowledge available to our human experts. Before we argue for the adequacy of such a compiled structure, it will be useful to give a brief characterization of diagnostic problems and how MDX solves them.

II MDX AND THE DIAGNOSTIC TASK

By the term "diagnostic task" we mean something very specific: the identification of a case description with a node in a pre-determined diagnostic hierarchy. For the purpose of the current discussion let us assume that all the data that can be obtained are already there, i.e., the additional problem of launching exploratory procedures such as ordering new tests does not exist. The following brief account is a summary of the more detailed account [9] of diagnostic problem solving.

Let us imagine that corresponding to each node of the classification hierarchy alluded to earlier we identify a "concept." More general classificatory concepts are higher in the structure, while more particular ones are lower in the hierarchy. The total diagnostic knowledge is then distributed through the conceptual nodes of the hierarchy in a specific manner to be discussed shortly. In the medical domain, a fragment of such a hierarchy might be:

    INTERNIST
        LIVER
            HEPATITIS
            CHOLESTASIS
        HEART

    Fig. 1 Example Diagnostic Structure

The problem-solving for this task will be performed top down, i.e., the top-most concept will first get control of the case, then control will pass to an appropriate successor concept, and so on. In this case, INTERNIST first establishes that there is in fact a disease, then LIVER establishes that the case at hand involves some liver disease, while, say, HEART etc. reject the case as not being in their domain. After this, CHOLESTASIS may establish itself, and so on. Each of the concepts in the classification hierarchy has "how-to" knowledge in it in the form of a collection of diagnostic rules.
These rules are of the form: <symptoms> --> <concept in hierarchy>, e.g., "If high SGOT, add n units of evidence in favor of cholestasis." Because when a concept rules itself out of relevance to a case all its successors also get ruled out, large portions of the diagnostic knowledge structure never get exercised. On the other hand, when a concept is properly invoked, a small, highly relevant set of rules comes into play.

The problem-solving regime that is implicit in the structure can be characterized as an "establish-refine" type. That is, each concept first tries to establish itself. If it succeeds in establishing itself, then the refinement process consists of checking which of its successors can establish themselves. Each concept has several clusters of rules: confirmatory rules, exclusionary rules, and perhaps some recommendation rules. The evidence for confirmation and exclusion can be suitably weighted and combined to arrive at a conclusion to establish, reject, or suspend the concept. The last-mentioned situation may arise if there is insufficient data to make a decision. Recommendation rules are further optimization devices to reduce the work of the subconcepts. Further discussion of this latter type of rule is beyond the scope of this paper.

The concepts in the hierarchy are clearly not a static collection of knowledge. They are active in problem-solving. They also have knowledge only about establishing and rejecting the relevance of that conceptual entity. Thus they may be termed "specialists," in particular, "diagnostic specialists."

The above account of diagnostic problem-solving is quite incomplete. We have not indicated how multiple diseases can be handled within the framework above, in particular when a patient has a disease secondary to another disease. A more powerful model is outlined in [9].

III EXAMPLE ANALYSIS

A. Reasoning with Deep Knowledge

Before we get into the example proper, the following background will be useful. (In this entire discussion, the interests of clarity have overridden the interests of completeness and medical accuracy. Much of the discussion is simplified to make the essential technical points.)

The MDX system diagnoses a syndrome called cholestasis, a condition caused when the secretion of bile from the liver or its flow to the duodenum is blocked. Such a blockage can be caused by any of a number of causes; MDX attempts to pinpoint the cause in a given case. A subset of causes can be grouped under the category "Extrahepatic Obstruction," i.e., blockage due to an obstruction of bile flow outside the liver. In this example, we will assume that the physician has established that a cholestatic condition is present, and that he is examining whether the cause is extra-hepatic, in particular which of a number of possible extra-hepatic causes may be the underlying reason. Bile flows from the liver into the duodenum via the bile duct. The following is a sequence of explicit reasoning steps by a (hypothetical) physician.

1. The bile duct is a flexible and somewhat elastic tube. If such a tube has a blockage at some point and if there is fluid pressure building up, then it will be dilated on the "upstream" side of the blockage. Thus, if there is an extra-hepatic obstruction, the biliary tree inside the liver and a portion outside should be dilated. This should be visible in various imaging procedures as a specific visual pattern, call it <pattern 1>. Thus, EHO causes <pattern 1> in X-rays of the region.
2. Given a flexible duct, obstruction can be caused because there is a physical object in the duct, a contraction such as a stricture is present, some object outside the duct is pressing on the duct, or the internal diameter of the duct is reduced for some reason. Physical objects that I can think of in this context are a biliary stone and a tumor in the duct. Looking at the anatomy of the region, nearby organs are the gall bladder and pancreas. Cancers in these organs can press on the ducts and cause obstruction.

3. Biliary stones show up as a characteristic pattern (call it <pattern 2>) in a cholangiogram. Since the stones cause obstruction and an increase in the peristaltic action of the duct, they can cause acute colicky abdominal pain.

4. Cholangitis (inflammation of the bile duct) can cause swelling and reduce the duct diameter and bile flow. A stricture can be caused by the surgical wound from a prior b.duct surgery not healing properly.

. . . . . . . . . . . . . . .

B. Knowledge at Different Levels of Depth Used

In para 1 above, very general knowledge about flexible pipes, as well as knowledge about the anatomy of the region and about imaging procedures, is accessed, and from all these a highly specific rule is compiled relating a diagnostic state to certain X-ray patterns. In para 2, again very general knowledge about flexible ducts is used in conjunction with an anatomical model of the region and other causal knowledge to generate a list of possible causes. In para 3, knowledge about imaging procedures and about the physical properties of stones and ducts is used to infer certain consequences of the diagnostic state being present.

Clearly, a variety of knowledge structures are used in the above reasoning fragment, many of them surely deserving the name "deep" or "underlying" knowledge. After all, what is deeper than very general knowledge about flexible ducts? Note also that not all these pieces of knowledge are at the same level: some had to do with flexible ducts, while others were highly domain-dependent pieces such as knowledge about cholangiograms. In this particular instance the physician reasoned about the relationship between colicky abdominal pain and stones from more basic pieces of knowledge about peristaltic ducts. A more experienced physician may simply use a piece of knowledge, "biliary stones --> colicky abdominal pain." Thus any such reasoning will be a mixture of such compiled pieces and more general items of knowledge.

C. Diagnostic Structure

Now we shall attempt to show that we can create a diagnostic structure as a result of the above reasoning. Once this structure is available, most of the steps in the above reasoning can be skipped. Consider the hierarchy of specialists in Fig. 2. When a case is established to be cholestatic, control will pass to EHO in that figure. If EHO is able to establish itself (i.e., if intrahepatic and a portion of the extrahepatic ducts are dilated -- see the establish rule in the figure for EHO), it will call its successors (in parallel, for purposes of the current discussion). Each successor will similarly establish or reject itself by using appropriate rules. Typically only one of them will be established at each level. The one(s) that are established refine the hypothesis by calling their successors. The tip nodes that are established provide the most detailed diagnostic nodes for the case. (Again we caution that this portion of D is for illustration only; the reality is much more complicated.)
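As an illustration of the establish-refine regime over such a compiled structure, here is a minimal hypothetical sketch. The specialists, predicates, and weights below are invented stand-ins, and MDX's actual combination of confirmatory and exclusionary evidence is more elaborate than this simple weighted sum.

```python
# Hypothetical sketch of establish-refine control (not MDX's code).
# Each specialist weighs its compiled rule matches; if it establishes
# itself, it refines the hypothesis by calling its successors, so
# unestablished branches of the hierarchy are never exercised.

class Specialist:
    def __init__(self, name, rules, successors=()):
        self.name = name
        self.rules = rules              # list of (predicate, weight) pairs
        self.successors = successors

    def establish(self, case):
        score = sum(w for pred, w in self.rules if pred(case))
        return score > 0                # crude stand-in for MDX's weighting

def diagnose(specialist, case, established=None):
    established = [] if established is None else established
    if specialist.establish(case):
        established.append(specialist.name)
        for s in specialist.successors:     # the "refine" step
            diagnose(s, case, established)
    return established

stone = Specialist("Bil. Stone",
                   [(lambda c: c.get("pattern 2 in cholangiogram"), 2),
                    (lambda c: c.get("colicky abdominal pain"), 1)])
eho = Specialist("EHO",
                 [(lambda c: c.get("dilated intrahepatic ducts"), 2)],
                 successors=[stone])

case = {"dilated intrahepatic ducts": True,
        "pattern 2 in cholangiogram": True,
        "colicky abdominal pain": True}
print(diagnose(eho, case))   # ['EHO', 'Bil. Stone']
```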
If one were to denote by U (for underlying knowledge) all the knowledge structures that were accessed during the physician's reasoning described above, and by D the knowledge in the diagnostic structure of Fig. 2, then the claim is that all the knowledge in U that plays a role in diagnosis is in D in a form that is directly usable. If D fails to solve a problem, a resort to U will not improve the situation. (An exception, which does not counter the basic argument but may have some implementation consequences, is discussed in point (5) of Sec. IV.)

D. Relation Between D and U Structures

In structure D of Fig. 2, several diagnostic states are identified and organized hierarchically. (The justification for this particular hierarchy of diagnostic states as opposed to other alternatives -- e.g., one might have chosen to place biliary duct tumor and biliary stone as children of a node called physical obstruction, rather than group b.duct tumor with gall bladder cancer as we have done -- cannot be provided by reference only to knowledge in U. Meta-criteria such as therapeutic implications or commonality of knowledge needed for establishing the states come into play. For example, many of the manifestations of cancer are the same whether it is b.duct cancer or GB cancer; hence the choice presented. While further discussion of this issue is beyond the scope of this paper, the point to be noted is that D is organized for efficiency in the diagnostic process, a consideration which may not be directly available from U.)

The complex reasoning in para 1 based on anatomy and knowledge about ducts is simply compiled into the establishment rule in the EHO specialist. Para 2 resulted in candidates for further specializations of the hypothesis about the cause of cholestasis. Para 3 resulted in the procedure for establishing biliary stone as cause of cholestasis. The causal reasoning in para 4 resulted in a similar procedure for stricture.

    Extra-Hepatic Obstruction (EHO)
        Bil. Stone
        Inflammation (Cholangitis): Infectious, Sclerosing
        Cancer: b.duct cancer, GB cancer, pancreatic cancer
        Stricture

    EH Obstruction -- To establish (from para 1): dilation of intrahepatic
        ducts and a portion of the extrahepatic duct in X-ray
    Bil. Stone -- To establish (from para 3): <pattern 2> in cholangiogram;
        colicky abdominal pain, etc.
    Stricture -- To establish (from para 4): evidence of prior biliary tree
        surgery; <pattern 3> in X-ray
    (Similar boxes for other nodes.)

    Fig. 2 Diagnostic Structure for Extrahepatic Obstruction

IV ADEQUACY OF D-STRUCTURES FOR DIAGNOSIS

Let us briefly look at some of the ways in which D may fail to solve a case.

(1) A concept is missing in D because a needed chunk of knowledge is missing in U. E.g., suppose in a particular case EHO is established, and evidence of inflammation is present. Suppose further that cholangitis is ruled out. D will be able to provide only an incomplete solution to the problem. It turns out that there is another inflammation possibility in that region that can also cause obstruction: pancreatitis. This piece of knowledge was missing in U. This resulted in D missing a concept, a sibling of cholangitis. Thus the problem can be traced to an inadequacy of U. An automated diagnostic system endowed with the same U will gain no further ability to solve this case by referring to U, deeper though it is.

(2) Some establish/reject knowledge is missing in a specialist. Again such a deficiency can be traced to a missing chunk of knowledge in U.

(3) D's problem-solving strategy is not powerful enough. (As indicated in Sec. II, the problem-solving strategy outlined there is inadequate, and a more powerful strategy is given in [9].) In this case a reference to U still will not be able to do the trick.
The weakness of the PS strategy of D means that the system designer has not fully comprehended the use of knowledge for diagnostic problem solving. Thus access to U will not help, since the issue in question is precisely how to use the knowledge properly. A system which knows how to use the knowledge in U can equally be rewritten to embed that improved use of knowledge in the problem-solving strategy of D. For this reason the arguments for compilation are not strongly dependent on the correctness of the MDX approach in detail.

(4) The knowledge is there in U, D's PS strategy is powerful enough, but the knowledge was improperly compiled in D. This in fact often happens. But again access to U during problem solving by D will be useless, since the ability to use the knowledge in U effectively for diagnosis implies the ability to identify the proper use of that knowledge for compilation into D. In fact we feel the problem of automatic generation of D from U is a much harder research problem. One can view the research of Patil [5] and Pople [7] as attempts to understand the role of knowledge in U for diagnosis. If this type of research is successful, then we can provide an automatic mechanism for going from U to D.

(5) U has the relevant knowledge, but compiling all the relevant ways in which that knowledge can come into play in diagnosis will result in a combinatorial problem in representing them in D. A specific example will motivate this situation. In a CPC case involving EHO that MDX was presented with, the system established "inflammation" (see Fig. 2), but could not establish any of its successors. It turned out that in an unusual sequence of events a piece of fecal matter from the intestine had entered the bile duct through the Sphincter of Oddi and was physically obstructing bile flow. (This possibility escaped not only MDX, but all of the physicians who analyzed the case in the CPC. It was discovered during surgery.) One could suppose that in theory all such possibilities could be compiled, but, even if they could, the number of ways in which some piece of matter from one part of the body could end up, through some fistula say, in the bile duct is quite large. One way out in this kind of situation is for D simply to have a procedure which calls the underlying knowledge structure U. But notice that all the possibilities that were not compiled in D because they would be too numerous will nevertheless have to be generated one by one at this point. Thus in principle this is simply a run-time compilation of the needed portion of D. In fact, precisely because of the combinatorial nature of this situation, all the physicians, endowed with quite a powerful U-structure, failed to think of the real cause.

D may include a data base of patterns for further efficiency. Some of the major specialists such as EHO may contain a small set of patterns which can be used to dispose of some common possibilities without using the full PS capabilities of D. For example, assume for the sake of argument that EHO due to b.stone accounts for 90% of all EHO cases, and that due to stricture for a further 5%. (This happens not to be true; we reemphasize that this is purely for making a point.) Now the EHO specialist may, after establishing itself, access a small data base of patterns that may help to establish or dispose of these possibilities with small effort. Thus the system can solve 95% of the cases with extreme efficiency. But the important point is that if this data base of patterns is exhausted before problem solution, the rest of D is still available for more thorough problem solving.
V PROVIDING EXPLANATIONS

The idea that deeper knowledge structures are needed for providing explanations has been proposed often. It is perhaps worth examining the concept of explanation itself. Referring to the structure D in Fig. 2, suppose the diagnostic conclusion in a particular case was "EHO due to b.stone." Suppose we ask:

Q1. Why do you conclude that this patient has EHO due to b.stone?

The system at that point may refer to its procedure for establishing that hypothesis and say, "Dilation of intrahepatic and a portion of extrahepatic b.duct was observed in X-ray, thus I concluded EHO. Because further a characteristic pattern, <pattern 2>, was observed in cholangiogram, and because colicky abdominal pain was observed, I concluded b.stones." Note that the system is simply telling which of the rules was found to match for each specialist that was established. Some people may accept this explanation. At this level D can be quite satisfactory. Suppose a further question was asked:

Q2. Why does a dilated IH b.duct indicate EHO?

D as it stands cannot answer this question, since the rationale for the rule was left behind in U at the time of compilation. But note that giving an explanation for this question does not need any additional problem solving. We can simply associate with each piece of compiled knowledge in D a text string that explains its rationale (see the sketch below). This is possible because the answer to Q1 is patient-specific and thus requires problem solving with D, while the answer to Q2 is not patient-specific, and has already been generated in designing D.

Since in a human expert D is built by learning processes from U, inadequacies in D are often correlated with inadequacies in U. That is why we are suspicious of physicians who we feel cannot explain the basis of their conclusions by reference to deeper knowledge structures. Since U changes with time with new discoveries in medicine, most of us as patients would like to assume that our physicians are capable of translating the relevant changes in U into appropriate changes in D. An ability to explain by calling forth appropriate fragments of U is our assurance that the expertise the physician is using to diagnose our illness is built on the basis of appropriate learning processes. Thus a system such as D, when it answers Q1, is giving evidence of the correctness of its own problem solving, while its answer to Q2 is evidence of the correctness of those human experts who provided the underlying knowledge structures and the compilation process.
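The rationale-string proposal can be pictured as follows; the rule name, test, and rationale text are our own hypothetical illustration rather than MDX's stored strings.

```python
# Hypothetical sketch: each compiled rule in D carries a rationale string
# written when D was designed, so Q2-style questions are answered by
# retrieval, with no run-time problem solving over U.

compiled_rules = {
    "establish EHO": {
        "test": "dilation of intrahepatic and a portion of extrahepatic "
                "ducts observed in X-ray",
        "rationale": "A blocked flexible duct dilates upstream of the "
                     "obstruction; for an extra-hepatic block the upstream "
                     "segments are the intrahepatic ducts and a portion of "
                     "the extrahepatic duct.",
    },
}

def answer_q1(rule):   # patient-specific: report which rule matched
    return "I concluded because: " + compiled_rules[rule]["test"]

def answer_q2(rule):   # not patient-specific: return the stored rationale
    return compiled_rules[rule]["rationale"]

print(answer_q1("establish EHO"))
print(answer_q2("establish EHO"))
```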
VI CONCLUDING REMARKS

Almost all of the discussion in the paper so far has dealt with the generic task of diagnosis. We have argued elsewhere [10] that there exist other generic tasks, each with a particular way of using knowledge for problem solving. Corresponding to each such task in a domain, one may construct a compiled problem-solving structure. In [10] we have identified a few other generic tasks that we have encountered in the analysis of medical reasoning. While most of the arguments in this paper do not depend upon the correctness of the details of the MDX approach to diagnostic problem solving, they do implicitly depend upon the existence of diagnosis as a generic task. This is what makes it possible to compile the underlying knowledge in a structure such as D. In our view, identification of further such generic tasks and of how these structures interact in complex problem-solving situations is an important challenge for AI research.

The approach we are arguing for sharpens some of the questions surrounding how a novice learns to become an expert. We suggest that this issue is facilitated by having some idea of what the target structures being learnt are. In the case of diagnosis, we suggest that learning to become an expert diagnostician is the process of compiling a structure such as D. Thus research into what sorts of knowledge structures U consists of and what sorts of processes are powerful enough to produce problem-solving structures such as D will significantly advance our ability to produce learning systems of considerable importance to expert system design.

ACKNOWLEDGMENT

We thank Jack Smith, M.D., for helpful comments on an earlier version of the paper. We acknowledge the contributions of Fernando Gomez to the approach outlined here. The preparation of this paper was supported by NSF grant MCS-8103480. The computing resources of Rutgers University Laboratory for Computer Science Research were used in system development, and this was made possible by the Biotechnology Resources Division of NIH, under grant RR-00643.

REFERENCES

[1] Hart, P. E. "Directions for AI in the eighties." SIGART Newsletter No. 79, January 1982.
[2] Michie, D. "High-road and low-road programs." AI Magazine 3:1 (1982) 21-22.
[3] Shortliffe, E. H. Computer-Based Medical Consultations: MYCIN. New York: Elsevier/North-Holland Inc., 1976.
[4] Weiss, S. and Kulikowski, C. A. "EXPERT: A system for developing consultation models." In Proc. IJCAI-79, Tokyo, Japan, August 1979.
[5] Patil, R. S. "Causal representation of patient illness for electrolyte and acid-base diagnosis." Ph.D. Dissertation, TR-267, MIT Lab for Computer Science, Cambridge, Mass., October 1981.
[6] Rouse, W. B. and Hunt, R. M. "A fuzzy rule-based model of human problem solving in fault diagnosis tasks." Working Paper, Coordinated Science Lab, University of Illinois, Urbana, 1980.
[7] Pople, H. E. "Heuristic methods for imposing structure on ill-structured problems." In P. Szolovits (ed.), Artificial Intelligence in Medicine, Westview Press, forthcoming.
[8] Mittal, S., Chandrasekaran, B., and Smith, J. "Overview of MDX -- a system for medical diagnosis."
[9] Gomez, F. and Chandrasekaran, B. "Knowledge organization and distribution for medical diagnosis."
[10] Chandrasekaran, B. "Decomposition of domain knowledge into knowledge sources." In Proc. Fourth National Conf. of the Canadian Society for Computational Studies of Intelligence, Saskatoon, 1982.
INDUCTION OF CAUSAL RELATIONSHIPS FROM A TIME-ORIENTED CLINICAL DATABASE: AN OVERVIEW OF THE RX PROJECT

Robert L. Blum
Stanford University, Department of Computer Science
Margaret Jacks Hall, Stanford, California 94305

ABSTRACT

The RX computer program examines a time-oriented clinical database and attempts to derive a set of (possibly) causal relationships. First, a Discovery Module uses lagged, nonparametric correlations to generate an ordered list of tentative relationships. Second, a Study Module uses a small knowledge base (KB) of medicine and statistics to create a study design to control for known confounders. The study design is then executed by an on-line statistical package, and the results are automatically incorporated into the KB as a machine-readable record. In determining the confounders of a new hypothesis the Study Module uses previously "learned" causal relationships.

INTRODUCTION

One of the most important reasons for accumulating patient data on computers is the possibility of deriving medical knowledge from the stored observations. The long-range objectives of the RX Project are 1) to increase the validity of medical knowledge derived from large time-oriented databases containing routine, non-randomized clinical data, 2) to provide knowledgeable assistance to a research investigator in studying medical hypotheses on large databases, and 3) to fully automate the process of hypothesis generation and exploratory confirmation.

The objective of this paper is to provide an introduction to the RX Project. The project is described in full detail in [1] and in summary form in [2]. Methods for storing and displaying the clinical data are described in [3]. A discussion of the difficulties in drawing inferences from clinical databases appears in [4].

Designed to emulate standard methods of epidemiological research, the RX computer program is a prototype system for automating the discovery, confirmation, and incorporation of knowledge from large clinical databases. While the program is a research prototype, parts of it have been operational since 1979 and have been demonstrated at national conferences.

A medical researcher enters a causal hypothesis of interest into the RX Study Module, for example, "does aspirin decrease blood hemoglobin?" The Study Module uses a small on-line knowledge base (KB) of medicine and statistics to produce a study design for this hypothesis. In doing this, the Study Module also uses pre-computed information on the amount of data on each variable stored in the database. This study design is then executed by a statistical package using the appropriate data from the database. The results of this study are then automatically encoded into the on-line medical knowledge base in a machine-readable form. The KB also contains knowledge that was entered directly into it by clinicians. Both kinds of knowledge are accessible to the Study Module while it is designing a study.

Now, instead of obtaining the initial hypothesis from a medical researcher, it is easy to imagine deriving it empirically from the database. Following this concept, a prototype Discovery Module was added to the RX Project in 1980. The Discovery Module combs through a subset of the patient database to derive an ordered list of hypotheses for exploration. These hypotheses are studied by the Study Module as though they had been entered by a researcher.
RX consists of five major parts: the database, the Discovery Module, the knowledge base, the Study Module, and a statistical analysis package. A brief description of each follows.

THE TIME-ORIENTED DATABASE

The database we use is the ARAMIS database, the American Rheumatism Association Medical Information System, developed at Stanford University and implemented on TOD, a Time-Oriented Database System [5] [6] [7]. A recent review of clinical databases appears in [8]. Our research, so far, has been done entirely on a subset of the ARAMIS database collected at the Stanford University Division of Immunology Clinics and containing the records of fifty patients with systemic lupus erythematosus. Each patient's record consists of a matrix of values for a set of attributes that may be recorded each time the patient is seen in the clinic. Values for several hundred attributes can be recorded in ARAMIS. The attributes include signs, symptoms, lab tests, therapies, and indices of patient functional status. In general, the time intervals between clinic visits are not uniform, and patients are not on treatment protocols.

TOD is implemented in PL/I; ARAMIS is stored on an IBM 370/3081 computer at the Stanford University Center for Information Technology. The RX Project, on the other hand, is implemented at two other computer facilities at Stanford University: SUMEX-AIM and SCORE. SUMEX-AIM features a DEC dual-processor KI-10 running the TENEX operating system; SCORE has a DEC 20/60 running TOPS-20. Data transfer from ARAMIS is done by magnetic tape.

All RX computer programs are written in INTERLISP, a dialect of LISP, a language highly suited for knowledge manipulation. The RX source code with knowledge base comprises approximately 200 disk pages of 512 words of 36 bits each.

THE DISCOVERY MODULE

The Discovery Module produces hypotheses of the form "A causes B". The hypotheses denote that in a number of individual patient records "A precedes and is correlated with B". The current Discovery Module uses lagged nonparametric correlations across variables but within individual patient records. The p-values of the correlations across patients are then combined to yield a score that is used to order the list of hypotheses. Knowledge from the medical KB is used to determine the range of time lags examined.
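To make this scheme concrete, the hypothetical sketch below computes one lagged Spearman correlation (a nonparametric choice) per patient record and combines the per-patient p-values into an ordering score. Fisher's method and the toy data are our own assumptions; the paper does not specify RX's actual combination rule.

```python
# Hypothetical sketch (not RX's code) of Discovery Module scoring:
# "A precedes and is correlated with B" within each patient record.

import math
from scipy.stats import spearmanr, chi2

def lagged_p(a_values, b_values, lag):
    """Spearman correlation of A at visit t with B at visit t+lag."""
    rho, p = spearmanr(a_values[:-lag], b_values[lag:])
    return p

def hypothesis_score(patients, lag=1):
    """Combine per-patient p-values (Fisher); smaller = more promising."""
    ps = [max(lagged_p(a, b, lag), 1e-300) for a, b in patients]
    stat = -2.0 * sum(math.log(p) for p in ps)
    return chi2.sf(stat, df=2 * len(ps))

# toy records: (aspirin dose, hemoglobin) visit series for two patients
patients = [([1, 2, 3, 4, 5, 6], [10.0, 9.6, 9.7, 9.0, 8.7, 8.5]),
            ([0, 1, 1, 2, 3, 3], [11.0, 11.0, 10.8, 10.5, 10.1, 10.0])]
print(hypothesis_score(patients))   # ranks "aspirin decreases hemoglobin"
```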
THE KNOWLEDGE BASE

The leitmotif of the RX Project is that derivation of new knowledge from databases can best be performed by integrating existing knowledge of relevant parts of medicine and statistics into the medical information system. In the RX computer program the medical KB determines the operation of the Discovery Module, plays a pivotal role in the creation of subsequent studies in the Study Module, and finally serves as a repository for newly created knowledge. The medical KB grows by automatically incorporating new knowledge into itself. Hence, it is designed in such a way that relationships derived from the database are translated into the same machine-readable form as knowledge entered from the medical literature by clinicians.

The main data structure of RX's knowledge base is a tree representing a taxonomy of relevant aspects of medicine and statistics. Each object in the tree is represented as a schema containing an arbitrary number of property-value pairs. The RX KB contains approximately 250 schemata pertaining to medicine, 50 pertaining to statistics, and 50 system schemata.

The medical knowledge in the RX KB covers only a small portion of what is known about systemic lupus erythematosus and limited areas of general internal medicine. The present KB is merely a test vehicle; its size is 50 disk pages or 120,000 bytes.

The most important class of properties in the schema corresponding to each medical object is that specifying the causal relationships of an object to other objects. Causal relationships are stored between objects using an "effects" and an "affected-by" property list for each object. The resulting causal model is a directed cyclic graph; that is, the representation allows for the possibility that A causes B causes A, with appropriate time lags. Besides the simple fact that A may affect B, each causal relationship is represented by a set of features as below:

    < intensity, frequency, direction, setting, functional form, validity, evidence >

The entire causal relationship is machine-readable. This enables it to be used automatically by the Study Module during subsequent studies. The causal relationships in the KB can also be interactively displayed in a variety of forms. All paths connecting two nodes may be displayed, or the details of a particular causal relationship: its mathematical form, the evidence supporting it, or its distribution across patients.

THE STUDY MODULE

The Study Module is the core of the RX algorithm. It takes as input a causal hypothesis obtained either from the Discovery Module or interactively from a researcher. It then generates a medically and statistically plausible model of the hypothesis, which it analyzes on appropriate data from the database.

In creating a study design the Study Module follows accepted principles of epidemiological research. It determines study feasibility and study design: cross-sectional versus longitudinal. It uses the KB to determine the confounders of a given hypothesis, and it selects methods for controlling their influence: elimination of patient records, elimination of confounding time intervals, or statistical control. The Study Module then determines an appropriate statistical method, using knowledge stored as production rules. Most studies have used a longitudinal design involving a multiple regression model applied to individual patient records. Results across patients are combined using weights based on the precision of the estimated regression coefficient for each patient. The steps in the Study Module appear below; a sketch of the analysis in steps 8b-8c follows the list.

1) Parse the hypothesis and determine the classification of variables in it.
2) Determine the feasibility of the study on the database.
3) Select confounding variables and causal dominators using the KB.
4) Select methods for controlling the confounding variables.
5) Determine proxy variables.
6) Determine eligibility criteria.
7) Create a statistical model of the hypothesis using knowledge from the KB.
   a) Select an overall study design.
   b) Select statistical methods.
   c) Format the appropriate database access functions.
8) Run the study.
   a) Fetch the appropriate data from eligible patient records.
   b) Perform a statistical analysis of each patient's record.
   c) Combine the results across patients.
9) Interpret the results to determine medical and statistical significance.
10) Incorporate the results into the knowledge base.
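Here is a minimal sketch of steps 8b and 8c, under stated assumptions: ordinary least squares stands in for the multiple-regression model, the autocorrelation and confounder handling used by RX are omitted, and the records are invented. The inverse-variance weighting follows the precision weighting described above.

```python
# Hypothetical sketch (not RX's code): regress effect on cause within
# each patient record, then combine the slopes across patients weighted
# by the precision (inverse variance) of each estimated coefficient.

import numpy as np

def patient_slope(x, y):
    """OLS slope of y on x and the estimated variance of that slope."""
    X = np.column_stack([np.ones_like(x), x])
    beta, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = residuals[0] / (len(y) - 2)          # residual variance
    var_slope = sigma2 * np.linalg.inv(X.T @ X)[1, 1]
    return beta[1], var_slope

def combined_effect(records):
    slopes, weights = [], []
    for x, y in records:
        b, v = patient_slope(np.asarray(x, float), np.asarray(y, float))
        slopes.append(b)
        weights.append(1.0 / v)                   # precision weighting
    return np.average(slopes, weights=weights)

records = [([1, 2, 3, 4, 5], [10.1, 9.7, 9.5, 9.0, 8.8]),
           ([0, 1, 2, 2, 4], [11.0, 10.9, 10.4, 10.6, 10.0])]
print(combined_effect(records))                   # pooled dose effect
```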
THE STATISTICAL PACKAGE: IDL

Until July 1980, all statistical analyses were performed using SPSS as a subroutine. Currently all statistical analysis is done using IDL [9]. Written in INTERLISP, IDL makes available fast numerical computation, matrix manipulation, and a variety of high-level primitives for statistical computation. To the basic IDL package we have added fifty disk pages of other statistical routines. The Study Module writes the study design to disk, then calls IDL. IDL reads the study design, executes it, writes the results to disk, then calls the Study Module. The method of analysis we have used most often involves performing a separate multiple regression on each patient record, then combining results across patients. Our method of analysis accounts for autocorrelation and for differing quantities of data across patients.

RESULTS, AVAILABILITY, AND LIMITATIONS

The current RX system was applied to a sample database containing the longitudinal records of 50 patients with systemic lupus erythematosus followed for an average of 50 clinic visits. Several well-known effects of the steroid drug prednisone were confirmed by the Study Module. The Study Module automatically incorporated these new links and details of the studies into the KB in the format previously discussed.

The RX computer program is currently only a research prototype. It is not available outside our lab except for program development. We must emphasize that any methodology that draws causal inferences based on nonrandomized data is subject to an important limitation: unknown covariates cannot be controlled. The strength of a particular knowledge base lies in its comprehensiveness, but even so, it cannot guarantee nonspuriousness. Only through repeated studies, particularly through experimental manipulation of the causal variable, can a given result become more definitive.

ACKNOWLEDGEMENTS

I am grateful to Guy Kraines, Kent Bailey, and Byron William Brown for their assistance with the statistical models, to Gio Wiederhold for project management and conceptual guidance, to Ronald Kaplan and Beau Shiel for their assistance with IDL, and to James Fries, Dennis McShane, Alison Harlow, and James Standish for kindly providing access to the database. Funding for this research was provided by the National Center for Health Services Research through grants HS-03650 and HS-4389, by the National Library of Medicine through grant LM-03370, and by the Pharmaceutical Manufacturers Association Foundation. Computation facilities were provided by SUMEX-AIM through NIH grant RR-00785 from the Biotechnology Resources Program. Clinical data were obtained from the American Rheumatism Association Medical Information System, supported by grants AM-21393 and HS-03802.

REFERENCES

[1] Blum, Robert L.: Discovery and Representation of Causal Relationships from a Large Time-Oriented Clinical Database: The RX Project. Ph.D. Thesis, Stanford University, January 1982.
[2] Blum, Robert L.: Discovery, Confirmation, and Incorporation of Causal Relationships from a Large Time-Oriented Clinical Database: The RX Project. Computers and Biomedical Research, 15:2, 164-187, 1982.
[3] Blum, Robert L.: Displaying Clinical Data from a Time-Oriented Database. Computers in Biology and Medicine, 11:4, 197-210, 1981.
[4] Blum, Robert L. and Wiederhold, Gio: Inferring Knowledge from Clinical Data Banks Utilizing Techniques from Artificial Intelligence. Proc. 2nd Annual Symp. on Comp. Applic. in Med. Care, 303-307, IEEE, Washington, D.C., November 4-9, 1978.
[5] Weyl, Stephen; Fries, J.; Wiederhold, G.; Germano, F.: A Modular Self-Describing Clinical Databank System.
Computers and Biomedical Research, 8:3, 279-293, June 1975.
[6] Wiederhold, Gio; Fries, James F.: Structured Organization of Clinical Data Bases. AFIPS Conference Proceedings, 44: 479-485, 1975.
[7] Wiederhold, Gio: Database Design. McGraw-Hill, 1977.
[8] Wiederhold, Gio: Databases for Health Care. Springer-Verlag, 1981.
[9] Kaplan, Ronald M., et al.: The Interactive Data-analysis Language Reference Manual. Xerox Palo Alto Research Corp., 1978.
USER PARTICIPATION IN THE REASONING PROCESSES OF EXPERT SYSTEMS

Martha E. Pollack, Julia Hirschberg and Bonnie Webber
Department of Computer and Information Science
The Moore School, University of Pennsylvania, Philadelphia, Pa. 19104

To date, user participation in the reasoning processes of expert systems has been largely limited to probing expert reasoning or adding limited information. The user may only ask why the system requested more information and how it arrived at its advice. Research into extending the capabilities of expert systems [1,2,3] has so far failed to recognize the need to permit the full range of interactions possible when humans engage in the normal giving and getting of advice.

... user input and human expert response. From the radio show we collected twelve and one-half hours of user-expert interaction, involving 120 callers. An examination of these protocols reveals a regular pattern of interaction, which we describe as negotiation, the process whereby people arrive at a conclusion by means of a discussion. Rarely does a caller simply state a problem and passively listen to the expert's response. Rather, the caller actively participates in the definition and resolution of the problem. Caller and expert must often negotiate to determine the statement of a problem the expert can solve and the statement of a solution the expert can support and the caller accept -- and, ideally, understand. They may also need to negotiate a common understanding of terminology, a common set of world or domain beliefs, or an acceptable justification for the solution.

... user model, and then, based on that analysis and his own motivation(s), determine his own goal(s), strategy(s) and move.

When the user decides to participate, s/he has some motive for doing so and some goal s/he hopes to achieve. Motivation answers the question "Why does the user decide to participate?", while goal answers "What does the user hope to achieve through participation?". The user then adopts some strategy by which s/he attempts to achieve the goal(s) s/he has set; so, strategy answers the question "How does the user attempt to achieve the goal?". This strategy is realized linguistically in an utterance, the user's move.

For example, a user may be motivated by his/her surprise at an expert's advice: s/he may have thought of and rejected an answer because s/he believes it violates some specific constraint s/he wants met. His/her goal is then to gain assurance that the expert's response meets that constraint. S/he might try to achieve this goal by highlighting the constraint; this she might do with the question "But that would be illegal (or immoral, or expensive, or fattening), wouldn't it?". The expert, upon hearing the user's utterance, must recognize the user's intent in participating. Determining a user's strategy, goal and motivation is an important aspect of natural language understanding research; recent work on recognizing speakers' intentions [4,5] can help to explain this recognition process. However, we believe that in interactions with an expert system, the range of user motivations, goals and strategies is more constrained than in unrestricted discourse, thus facilitating the implementation of an expert system component to recognize intentions. After recognizing the user's intentions, the expert must choose a suitable response and realize it linguistically.
The latter problem has been studied in language generation research [6,7,8]. The former, deciding what response to provide to what form of participation, is a goal of our further research.

The following exchange between a caller and the radio expert illustrates the negotiation process described above:

1. L: "Hi, Harry, this is L."
2. Expert: "Welcome, L."
3. L: "Oh, hi, I have a question about charitable deductions."
4. Expert: "Sure."
5. L: "We give, uh, let me see, my husband's income this past year is roughly $24,000, not quite, and we gave maybe $1400 to our church and other religious organizations that we have receipts for."
6. Expert: "Right."
7. L: "Now we also give to Sunday School, to the benevolent offering, to this and that that just is out of pocket money that we don't have receipts for. Given the amount of money that we give that we do have receipts for, what is the amount that we would like be able to claim on our income tax that IRS won't call us in?"
8. Expert: "Oh, you're asking for a figure that nobody can really give you. How much was that, do you know?"
9. L: "Well in the past we've declared around one hundred and they've never called us in, but we give almost double that."
10. Expert: "Well if you give that much, and you have some evidence that you give that much, that's fine. If you put $2 a week, for example, in an envelope, that comes to $100; if you put $3, you've got $150, and if that's your regular contribution, deduct it."
11. L: "But (pause) the thing is, like Sunday School offering they don't keep track of who gives what, and they have a benevolent fund when we have communion, you put $5, you don't keep track of who gives what, that's what I'm talking about."
12. Expert: "What I would suggest that you do, is hereafter keep a record of what you have given. All you can do now is estimate that amount to the best of your ability."
13. L: "Now my records would be okay now I mean (pause)"
14. Expert: "Absolutely, your records will stand up too."
15. L: "Oh, I didn't know that!"
16. Expert: "Oh sure. What about the guy who goes out and spends money on a taxi cab and leaves the guy a tip? That tip is deductible and you don't get a receipt for a tip. You might even not get a receipt from the taxi cab."
17. L: "Oh, I didn't know that. I thought it was just what we had a receipt from the church for that (pause)"
18. Expert: "Oh, no."
19. Expert: "Plus the fact, the mere fact that the IRS calls you in and even desires to disallow part of your deduction, you're not required to agree with them."
20. L: "Right (pause)"
21. Expert: "You stand up for your rights. Whatever you give to charity, you claim."
22. L (laughing): "I don't want the hassle, of an, of an (pause)"
23. Expert: "Well, now you've got another point, and I think at times the service counts on the fact that people don't want the hassle. And maybe we as Americans have to stand up a little bit more and claim what's due us."
24. L: "Okay, I'll keep my own record of it from here on in."
25. Expert: "That's right, and estimate the best that you can based on what you did in the past, and your record of what you do in 1982 if you're ever called in will assist you in that. Okay?"
26. L: "Okay, now in the past we've declared $100 of un --"
27. Expert: "But if you feel it's closer to $200, use the $200."
28. L: "Now could I jump from $100 to $200 in one year?"
29. Expert: "Absolutely!"
30. L: "I could?"
31. Expert: "Absolutely!"
32. L: "Okay?"
33. Expert: "All right?"
34. L: "Okay, thank you."
In (1-2) L and the expert initiate the dialogue by identifying themselves. This mutual introduction is important in beginning the process by which the expert develops a model of the caller -- and the caller of the expert. In the radio show it often establishes whether or not the caller has previously used the system, i.e., is already known to the expert. It also establishes initial rapport between caller and expert.

The exchange in (3-4) shows L limiting the domain of her query before even posing it. The expert recognizes in L's initial participation in the reasoning process an implicit desire for reassurance that her question will be appropriate, and he provides such reassurance in (4).

In (5) L continues to participate in the reasoning process by offering information she thinks the expert will need in order to answer her query -- still before the query is made: she believes her husband's income and the amount of charitable donations for which they have receipts will be important factors in the expert's calculations. Her information adds to the expert's model of his caller, by providing family income level and by suggesting that L is not employed outside the home.

In (7) L finally poses her first direct query. She and her husband make charitable donations for which they do not have receipts, and she wants to know, given the amount of donations for which they do have receipts, how much of the unreceipted donations they can deduct from their income tax. Note that in making this query, the type of query which existing systems might be expected to handle, L indicates two important points to the human expert which existing expert systems would not be capable of recognizing. First, she believes that the amount she can deduct for donations for which she has no receipts is a function of the amount of charitable donations for which she does have receipts ("Given the amount..."). Second, she imposes a user-specific constraint upon the expert: she does not want to be called in by the IRS to explain her deductions. This constraint is added to the expert's user model but not addressed directly until (19).

In response to (7), the expert first informs L that her initial query is unanswerable. However, he goes beyond the simple rejection of her query to seek information which will help him answer what he perceives to be her real question: how can she deduct charitable donations for which she has no receipts. He must ascertain L's motivation behind a question which, as the expert says, no one can answer. Attempting to satisfy L's implicit goal, the expert begins to elicit information L had not realized was pertinent. First he asks for the amount of donations for which she has no receipts (8). In providing this information in (9), L continues to participate in the reasoning process by indirectly reiterating her initial constraint: not being called in by the IRS. In (10) the expert gives a direct answer to what is in his view her true query: she can deduct the full amount she has donated. He also provides spontaneous justification for his response, using a hypothetical suggested by his user model -- L is clearly a church-goer -- by noting that a regular, periodic contribution of a reasonable amount will be acceptable to the IRS even without written receipts.
He must ascertain L's motivation behind a question which, as the expert says, no one can answer. Attempting to satisfy L's implicit goal, the expert begins to elicit information L had not realized was pertinent. First he asks for the amount of donations for which she has no receipts.(8). In providing this information in (9) L continues to participate in original constraint: she does not want the "hassle" The of being called in by the IRS. expert accepts her motivation as valid (23), but Eurther justifies his advice by claiming that following it would benefit not just L but all who deal with the IKS. In (24-31) L and the expert finish the discussion, reaching some agreement over how much of the expert's advice L intends to accept. In (24) she indicates her readiness to accept part of the expert's advice: that she keep her own records from now on. Note that the advice accepted is significantly different from the The the reasoning process by indirectly reiterating advice requested in her original query. expert reiterates in (25) his further advice that her initial constraint: not being called in by she estimate for this year's income tax and adds the IRS. In (10) the expert gives a direct answer the unsolicited information that her 1982 records to what is in his view her true query: she can deduct the full amount she has donated. He also will assist her if the IRS questions deductions in the future. provides spontaneous justification for his In (26,28) L finally accepts the response using a hypothetical suggested by his notion of increasing her estimate, but asks the user model -- L is clearly a church-goer -- by expert for assurance that such a course of action noting that a regular , periodic contribution of a will not cause problems. In (29-34) expert and reasonable amount will be acceptable to the IRS user play out the final stage or end game of the even without written receipts. negotiation process, one of expert reaffirmation, user desire for final reassurance, and expert In (11) L shows her dissatisfaction with his response, suggesting that the expert has not realized that the contributions she is worried about were not regular like the ones he noted in his hypothetical. She signals her dissatisfaction with the words "But (pause) the thing is..." and II . ..that's what I'm talking about." The expert's prior justification has led her to worry that he has failed to realize the facts in the case. She wants assurance that he has used all these facts in determining his advice, so she reiterates them. In (12) the expert recognizes her concern and provides a more suitable response: he accepts the information that her contributions are irregular concern that the user is truly satisfied with the advice given. Clearly, existing expert systems cannot support such flexible exchanges. To do so requires a system capable of (i) detecting the user's explicit, implicit or indirectly stated goals, desires and expectations; (ii) performing and (iii) reasoning consistent with them; providing a satisfactory response. Most indirect user queries cannot be recognized as such by current systems. For example, the technique of reiterating a fact used by L above (11) is a 360 typical way for a human to signal to an expert a desire that he demonstrate his use of that fact; current systems require explicit user query. Similarly, reasoning consistent with indirectly stated goals and desires is impossible in current systems. Neither will current systems spontaneously answer questions not explicitly asked of them. 
When the expert realizes that L's concern is her lack of written records, he not only explains a viable alternative for the present, but also provides additional information which will aid her in the preparation of future returns. Providing alternate forms of justification in response to user dissatisfaction is also unsupported by existing systems. In L's case, the expert's initial justification of the viability of personal records is unsatisfactory, while his subsequent statement that there is no alternative and, finally, his hypothetical example eventually satisfy her.

The behavior exhibited by L is typical of the behavior we have observed in our protocols: people are often indirect in their dealings with a human expert. If artificial expert systems are to become widely used, their designers must acknowledge the fact that people actively participate in the definition and resolution of their own problems. While people are not always able precisely to define the problems which cause them to seek information or advice, they do often have preconceptions about what a solution to their problems involves or what constraints it must satisfy. The negotiation model we have described provides a basis for characterizing what a user may be trying to do at any point in his/her interaction with an expert, and identifies the options that are, in turn, available to the expert.

ACKNOWLEDGEMENTS

We would like to thank Barbara Grosz and Aravind K. Joshi for their most helpful comments on this work.

REFERENCES

[1] Swartout, W. R. "Producing Explanations and Justifications of Expert Consulting Programs," M.I.T. Technical Report LCS TR-251.
[2] Clancey, W. J. "The Epistemology of a Rule-Based System: A Framework for Explanation," Stanford Technical Report STAN-CS-81-896.
[3] Hayes, P., and Reddy, R. "An Anatomy of Graceful Interaction in Spoken and Written Man-Machine Communication," Carnegie-Mellon Technical Report CMU-CS-79-144.
[4] Cohen, P. R., Perrault, C. R., and Allen, J. F. "Beyond Question-Answering," Bolt Beranek and Newman Report No. 4644.
[5] Brown, G. P. "Characterizing Indirect Speech Acts," American Journal of Computational Linguistics, 6:3-4.
[6] McKeown, K. R. "Generating Natural Language Text in Response to Questions about Database Structure," Ph.D. thesis, Univ. of Pennsylvania, 1982.
[7] McDonald, D. D. "Natural Language Production as a Process of Decision Making under Constraint," Ph.D. thesis, M.I.T., 1980.
[8] Appelt, D. "Planning Natural-Language Utterances to Satisfy Multiple Goals," SRI Technical Note 259, 1982.
KNOWLEDGE MANAGEMENT: A PRACTICAL AMALGAM OF KNOWLEDGE AND DATA BASE TECHNOLOGY

Charles Kellogg
System Development Corporation, Santa Monica, California 90406

ABSTRACT

This paper describes the central features of a system designed for the management of large amounts of application-specific knowledge. The Knowledge Manager (KM-I) employs distinct software and hardware processors to implement:

o A file of general knowledge and an associated reasoning engine
o A file of specific knowledge and an associated searching engine

We present our reasons for believing that this can be an effective strategy for realizing many practical knowledge-based/expert system applications that lie in a large overlapping area between practical AI and advanced data management technology. We then outline the major features and components of the system and discuss the range of intended applications.

I. INTRODUCTION

For several years knowledge-based system (KBS) and data management system (DMS) technologies have developed in parallel but, with few exceptions, independently of each other. DMS systems are an outgrowth of early attempts to build practical systems for commercial and military information processing applications. The field started with the development of file and record management systems to support inventory control, general ledger, and similar applications, and has progressed through the development of generalized DMS systems supporting hierarchical and network data models and, more recently, relational data models [1]. KBS systems, on the other hand, have stemmed from attempts to apply artificial intelligence research in knowledge representation and reasoning to domains where typically small amounts of domain-specific knowledge, obtained from experts in the domains, have been encoded into computer programs used to provide problem-solving assistance to professionals working in the domains. Examples of seminal systems of this kind are DENDRAL, MYCIN, and PROSPECTOR. Practical KBS systems are now appearing in both commercial (e.g., R1) and military (e.g., SU/X) applications [2,7].

A number of near-term applications should be able to benefit from a merger of KBS and DMS technologies. In this paper we discuss our strategy -- currently implemented at SDC in the KM-I prototype -- for producing knowledge management systems that integrate capabilities of the (heretofore) separate technologies.

II. KNOWLEDGE REPRESENTATION

Most existing knowledge-based systems utilize production rules, frames, semantic nets, or semantic hierarchies for knowledge representation. These systems are usually implemented in Lisp, and all knowledge (both general and specific) is encoded into a single large list structure. Knowledge accessing and reasoning is effected by heuristic programs that work against this large list structure. Consequently, KBS applications tend to be limited by the size of the Lisp list structure available for encoding knowledge. Data management systems in turn employ a variety of record-based data structures (nets, hierarchies, and relations) for representing specific knowledge but typically lack any form of reasoning component or means of dealing with general information. They are, further, usually programmed and optimized in conventional programming languages and microcode to make efficient use of sophisticated dictionaries and directories pointing into very large volumes of information systematically arranged on low-cost storage media.

Our approach to realizing a knowledge management system is based on:

o A solid theoretical foundation that supports the underlying knowledge representation, the means for reasoning with it, and a useful distinction between general and specific knowledge.
o State-of-the-art database machines and Lisp machines that allow both powerful and cost-effective systems.
Our approach to realizing a knowledge management system is based on:

o A solid theoretical foundation that supports the underlying knowledge representation, the means for reasoning with it, and a useful distinction between general and specific knowledge.
o State-of-the-art database machines and Lisp machines that allow both powerful and cost-effective systems.
o A performance-oriented reasoning engine that can derive plans from a general knowledge base to intelligently control database machine search of a specific knowledge base.

The theoretical foundation is the first-order predicate calculus (with functions)--a formalism that has been both praised and condemned throughout the history of AI. Although we cannot convince critics in this short paper, some idea of the descriptive adequacy and practical utility of this calculus for knowledge and data management can be found in recent papers on "Logic and Data Bases" [4,5] and "Logic Programming" [6].

Predicate calculus theory includes both a theory of reasoning (proof theory) and a theory of semantics (model theory). The former is the basis for our reasoning engine, and the latter is practically realizable through the database structuring used in relational data models (and the database machines that implement them). In KM-I, general knowledge is represented as a set of logically formatted assertions (premises or logical rules) that constitute a "micro" theory about some aspect of the world, while specific knowledge, formulated as relation tuples, provides a model or interpretation of the "micro" theory.

The distinction between general and specific knowledge is sometimes made using the terms "intensional" and "extensional" after Carnap [7]. Thus the phrases "intensional database" for formalized general knowledge and "extensional database" for specific knowledge are sometimes employed. The conceptual adequacy of our knowledge representation thus derives from logical theory, but the practical utility of our approach must be judged on the basis of the empirical performance of its components, separately and in combination, as discussed in Section III.
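To make the intensional/extensional split concrete, the following fragment (a Python sketch purely for illustration; KM-I itself was implemented in Interlisp and IDL, and the relation names, rule syntax, and facts here are invented, loosely echoing the shipping example later in the paper) holds general rules and specific relation tuples in separate stores:

    # Extensional database: explicit facts as relation tuples.
    EXTENSIONAL = {
        "ships_to": {("Emperor", "Ambassador"), ("Excelsior", "Excello")},
        "located_in": {("Ambassador", "Valley Acres"), ("Excello", "Valley Acres")},
    }

    # Intensional database: general rules, head <- body, with ?variables.
    # "Two shippers compete in a region if they ship to outlets in it."
    INTENSIONAL = [
        (("competitors", "?x", "?y", "?r"),
         [("ships_to", "?x", "?a"), ("ships_to", "?y", "?b"),
          ("located_in", "?a", "?r"), ("located_in", "?b", "?r")]),
    ]

    def is_base(relation):
        """Base relations live in the extensional store; others are derived."""
        return relation in EXTENSIONAL

    print(is_base("ships_to"), is_base("competitors"))   # True False

The point of the split is that only base relations ever touch the (potentially very large) extensional store, so the derived relations can be planned over symbolically before any search is done.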
III. THE KM-I SYSTEM

Current and future configurations of KM-I are illustrated in Figures 1 and 2. In the current version of KM-I the reasoning engine and general assertions are implemented within Interlisp and operate on a Xerox 1100 Scientific Information Processor (Lisp machine). The searching engine is a relational database machine that uses specialized hardware to cost-effectively implement the relational model of data--the Britton Lee IDM (Intelligent Database Machine) 500 [8]. Both reasoning and searching engines are designed to be general-purpose and application-independent within the constraints of their underlying knowledge representations.

At present communication between these two specialized machines must be mediated by a VAX 11/780. The 1100 communicates with the VAX by way of an Ethernet connection, while the IDM communicates (temporarily) by way of a serial, direct line to the VAX. The two VAX programs shown in Figure 1 convert data access strategies formulated by the reasoning engine in IDL (Intelligent Data Language) into IDM commands and, for the other direction, convert data retrieved by the IDM into Lisp machine format.

Explicit specific assertions (facts or tuples) contained in an extensional database are stored on a pair of 80 MB disks that are part of our IDM installation, while general assertions (premises or logical rules) are stored within the 23 MB disk that is part of the Xerox 1100. (This is sufficient storage for thousands of rules and millions of tuples.)

User queries are submitted in an English-like form to KM-I's reasoning engine, which translates the queries into the required logical formalism. The file of general knowledge is then employed to construct an "inference plan" (skeletal derivation). Then KM-I produces a "search/compute plan" (access and procedure-activation strategy) to be used by the searching engine for locating and manipulating relevant specific knowledge. Found information is fed back from the searching engine to the reasoning engine, which uses it to instantiate the skeletal derivation and to produce an answer and an "evidence chain" (the chain of reasoning steps that leads to the answer).

Figure 1. KM-I Current Configuration
Figure 2. KM-I Future Configuration

The key words "find", "what-if", and "given --- find ---" are used to trigger different forms of reasoning. "Find" queries produce backward reasoning steps that result in skeletal derivations and database values that support desired conclusions (goals), while "what-if" queries cause forward reasoning from assumptions through the general rules and the database of specific facts. A "given --- find ---" query, such as the following example from an implemented shipping/receiving application:

    Given Emperor ships heavy freight to Ambassador and Excelsior ships
    heavy freight to Excello, find that Emperor and Excelsior are
    competitors in Valley Acres

results in reasoning from assumptions toward goals and vice versa (sometimes called "middle-term reasoning"; see [9]). In the above example, the conclusion that Emperor and Excelsior compete with each other depends on the assumptions as well as on stored knowledge, both general and specific.

IV. THE KM-I REASONING ENGINE

Over the past few years we have implemented a series of deductive processors designed specifically to support question answering (see refs. [9] through [16]). The reasoning engine employed in KM-I is the latest version of our DADM (Deductively Augmented Data Management) deductive processor. The DADM deductive processor has been designed to use relational database systems for the storing and retrieving of specific knowledge. To support knowledge management, the design is being augmented with new acquisition and verification capabilities to more fully support knowledge base administrators. It will, for example, provide for general-knowledge extensions to existing relational databases through their schemas, as well as support the acquisition of combined general and specific knowledge bases from their inception.

DADM has been used to implement and experiment with knowledge manipulation in a number of application areas including bibliographic retrieval, alternative route finding, and command and control. Its deductive pathfinding and planning strategies have proved effective in quickly locating premises relevant to user requests, in constructing required derivations, and in producing intelligent data access strategies. The first KM-I application will be a "Manager's Assistant", which combines expert planning knowledge and a corporate database for the assistance of corporate project monitoring and planning.
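The "find" query cycle (skeletal derivation, database search, instantiation, evidence chain) can be illustrated with a toy backward chainer. This is an invented Python stand-in for the DADM machinery, not KM-I code; the unifier is deliberately naive and the data repeats the shipping example above:

    # Toy "find" evaluator in the spirit of KM-I's two phases: rules are
    # expanded by backward chaining, base-relation subgoals are resolved by
    # scanning tuples, and the chain of steps is kept as an evidence chain.
    TUPLES = {
        "ships_to": [("Emperor", "Ambassador"), ("Excelsior", "Excello")],
        "located_in": [("Ambassador", "Valley Acres"), ("Excello", "Valley Acres")],
    }
    RULES = [
        (("competitors", "?x", "?y", "?r"),
         [("ships_to", "?x", "?a"), ("ships_to", "?y", "?b"),
          ("located_in", "?a", "?r"), ("located_in", "?b", "?r")]),
    ]

    def unify(a, b, env):
        """Unify two term tuples of constants/?vars under env, or None."""
        if len(a) != len(b):
            return None
        env = dict(env)
        for s, t in zip(a, b):
            s = env.get(s, s) if isinstance(s, str) and s.startswith("?") else s
            t = env.get(t, t) if isinstance(t, str) and t.startswith("?") else t
            if s == t:
                continue
            if isinstance(s, str) and s.startswith("?"):
                env[s] = t
            elif isinstance(t, str) and t.startswith("?"):
                env[t] = s
            else:
                return None
        return env

    def solve(goals, env, chain):
        """Yield (bindings, evidence chain) pairs satisfying all goals."""
        if not goals:
            yield env, chain
            return
        goal, rest = goals[0], goals[1:]
        if goal[0] in TUPLES:                  # search phase: scan the EDB
            for row in TUPLES[goal[0]]:
                e = unify(goal[1:], row, env)
                if e is not None:
                    yield from solve(rest, e, chain + [(goal[0],) + row])
        for head, body in RULES:               # reasoning phase: expand a rule
            e = unify(goal, head, env)
            if e is not None:
                yield from solve(list(body) + rest, e, chain + ["rule:" + head[0]])

    for env, chain in solve([("competitors", "?x", "?y", "?r")], {}, []):
        if env.get("?r") == "Valley Acres" and env["?x"] != env["?y"]:
            print(env["?x"], "and", env["?y"], "compete; evidence:", chain)

A real system would, as the paper describes, first extract the base-relation subgoals as a search/compute plan and ship that plan to the database machine in bulk rather than scanning tuples one goal at a time.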
V. NEXT STEPS

We expect to bring up several other applications in the current KM-I environment, but at the same time we are reviewing other Lisp and database machine combinations. Eventually we plan to move to the configuration illustrated in Figure 2. This will provide a "standalone" knowledge management capability not dependent upon a large mainframe computer. While such a configuration is not now inexpensive, within the next few years such a combined system could be achievable at costs comparable to those of current professional workstations.

VI. REFERENCES

[1] Codd, E. F., A Relational Model of Data for Large Shared Data Banks, Comm. ACM, Vol. 13, No. 6, June 1970.
[2] Barr, A., Feigenbaum, E., The Handbook of Artificial Intelligence, HeurisTech Press, 1981.
[3] Webster, R., Miner, L., Expert Systems: Programming Problem-Solving, in Technology, Vol. 2, Jan/Feb 1982.
[4] Gallaire, H., Minker, J., eds., Logic and Data Bases, Plenum Press, 1978.
[5] Gallaire, H., Minker, J., Nicolas, J., eds., Advances in Data Base Theory, Volume 1, Plenum Press, 1981.
[6] Kowalski, R. A., Logic for Problem Solving, North Holland Pub. Co., 1979.
[7] Carnap, R., Meaning and Necessity, Univ. of Chicago Press, 1956.
[8] IDM 500 Software Reference Manual, Version 1.3, Britton Lee, Inc., Los Gatos, Ca., Sep. 1981.
[9] Kellogg, C., Travis, L., Reasoning with Data in a Deductively Augmented Data Base System, in Advances in Data Base Theory, Volume 1, Gallaire, H., Minker, J., eds., Plenum Press, 1981.
[10] Kellogg, C., Research on Deductive Inference for Large Data Bases, System Development Corporation, TM-6263, 1979.
[11] Kellogg, C., Klahr, P., Travis, L., Deductive Planning and Pathfinding for Relational Data Bases, in Logic and Data Bases, Gallaire, H., Minker, J., eds., Plenum Press, 1978.
[12] Kellogg, C., Klahr, P., Travis, L., Deductive Methods for Large Data Bases, Proc. Fifth International Joint Conference on Artificial Intelligence, MIT, 1977.
[13] Kellogg, C., Creating Inference Plans for Deductive Access to Relational Data Bases, in Proceedings of Workshop on Logic and Data Bases, C.E.R.T., Toulouse, France, 1979.
[14] Kellogg, C., Klahr, P., Travis, L., A Deductive Capability for Data Management, in Systems for Large Data Bases, Lockemann, P. C., Neuhold, E. J., eds., North-Holland Publishing Co., 1976.
[15] Klahr, P., Kellogg, C., Travis, L., A Deductive System for Natural-Language Question-Answering, in Natural Language Based Computer Systems, Bolc, L., ed., Hanser-Verlag, 1980.
[16] Klahr, P., Planning Techniques for Rule Selection in Deductive Question-Answering, in Pattern-Directed Inference Systems, Waterman, D. and Hayes-Roth, F., eds., Academic Press, 1978.
Toward a Learning of Object Models Using Analogical Objects and Verbal Instruction

Norihiro Abe, Fumihide Itoh and Saburo Tsuji
Department of Control Engineering
Faculty of Engineering Science
Osaka University, Toyonaka, Osaka, JAPAN

Abstract

In this paper an attempt at an analogical learning method by verbalism is shown, in order to create a model for the identification of unknown objects. When we expect a computer to recognize objects, their models must be given to it; however, there are cases where some objects may not be matched to the models, or where there is no model with which an object can be compared. At that time, this system can augment or create new descriptions by making use of explicit verbal instructions.

1. Introduction

We have reported on a story understanding system which uses both linguistic and pictorial information in order to resolve the meaning of given sentences and images [1]. From this research we came to believe that the correct meaning of given sentences is obtained if the relations among noun phrases, which correspond to objects in the images, are consistent with the relations observed among objects in the picture. The fact that this identification of objects and the interpretation of the given sentences supplement each other simplifies both the detection of objects and the disambiguation of word sense or prepositional groups. In spite of these effects, this formalism has a defect: it requires additional knowledge sources from the system, namely the meaning of verbs and the models of the objects that will appear in the images.

All models of objects or actors that are supposed to appear in the picture must be given to our system in order for it to achieve its purposes. But it is not easy for us to store all such necessary models in the computer. If a person who does not know well about the details of this system wants to interact with it, he will give up using it, as he knows nothing of the representation of models in the computer. To make matters worse, there are quite many variations among the real objects we encounter in the real world. For example, we can see various types of houses. In the traditional AI system, a generic model is utilized to identify such a class of various objects. But it is not easy for such a system to discriminate the idiosyncrasies of various objects.

Fig. 1 shows a part of a sample story used to test the story understanding capability [1]. Even if the system is given a generic model (for example, BOGLE) that represents both OBAQ and OJIRO, the system will not be able to discriminate them. The system needs some proper models for OBAQ and OJIRO. But if a new character which has some points of similarity to OBAQ and OJIRO appears in the story, some modification to this BOGLE model is required. Even if we could give the system some suitable generic models, it is not easy to augment the description of a generic model so as to represent all possible features that various objects in the class will have. Thus the generalization process cannot be accomplished in advance, but should be achieved through experience.

In order to realize this type of learning, there are two important problems to be solved. First is an explanation capability. Unless a capability to convey one's obscure points to his partner is given to the system, it is difficult for the system to obtain good instructions from its partner.
Although it is needless to say that a facility to interpret a natural language is required of the system, the facility to state the level of its understanding of objects is also inevitable. Concerning this point we have already reported in [2], so it will be omitted here. Second is the question of from what kind of knowledge state the system should start its learning process. Should an initial state of knowledge be given in the form of an inner representation, or be explained in some natural language? We select the former approach for just the following reason. We think it quite difficult to give a clear view of an unknown object without referring to models. So we restrict the class of objects learned by our system to the group of objects for which the system can obtain clear views concerning their similarity through comparison with similar examples. But the assumption is not required that examples should differ in only one or two points at most from the unknown object. Many discrepancies between the object and its models are permitted to exist, because such differences can be explained explicitly in language by a teacher. And through a cognition of the analogical or discrepant points of objects belonging to the same concept class, a generalization process is invoked that creates a concept common to them.

2. Concept of Analogy

When we think that there is an analogical point among things, we have already known the reason for the existence of the analogy. On some occasions, the analogy means a similarity in a shape or a color or a coincidence of a location, and in other cases it implies a similarity in the substructures of the objects. Throughout our lives, we acquire ways to find a similarity among many things. The matching mechanism we have uses some intrinsic attributes of objects when they are compared. Our pattern matcher does not examine descriptions of objects in a uniform way like the traditional one for abstract learning, but must properly change its estimation of similarity according to the objects. Considering a pattern matcher like this, another question arises: whether the representation of objects should be changed according to the objects. It may be true that there are representations proper to each class of objects, say birds, vehicles or houses. In fact, some part of our knowledge can be described in the form of procedures, and other parts can be represented as tables or graphs. Though we also believe that each class of objects should be described in its own representation, it is quite difficult to compare things which are described in different ways. For reasons like this, we have adopted a unique form of representation as reported in [1,2], which can be supposed to be applicable to all objects. By these settlements, it becomes possible for the system to make correspondences between the descriptions of objects and the expressions of a sentence. Still more, it can be expected that a dialog between the system and the teacher (T) progresses smoothly and that the teacher can infer the method used in the interpretation of language and in that of the descriptions of objects that the system owns.

Fig.1 A portion of the story. ("He gives OJIRO the apple.")

3. Description of models

The model description in this paper is the same one shown in the paper [2], so we omit the details here. Fig. 2 shows a frame model of OBAQ.
Here the IMAGE slot needs an explanation, for it is a newly introduced slot in this paper. It has a pointer to an instance image of OBAQ, and by tracing this pointer the system can get a real image of OBAQ. This is necessary for compensating for insufficient parts of the representation in the system. A concrete example will be shown later.

Fig.2 A frame of OJIRO copied from that of OBAQ:

    OJIRO    AKO $VAL BOGLE; CLASS $VAL INSTANCE; SUBPART $VAL (J-BODY);
             IMAGE $VAL (...); SEX $VAL MAN; REASON $VAL GIVEN
    J-BODY   AKO $VAL BODY; CLASS $VAL INSTANCE; FIGURE PART OJIRO;
             RELATION IN; POSITION ((**) **); SHAPE $VAL REGION;
             SUBPART $VAL (J-MOUTH J-EYE J-HAIR J-HAND); COLOR $VAL WHITE
    J-MOUTH  AKO $VAL MOUTH; CLASS $VAL INSTANCE; FIGURE PART J-BODY;
             RELATION IN; POSITION ((C) C); SHAPE $VAL REGION;
             SUBPART J-LIP; COLOR PINK
    J-HAIR   AKO $VAL HAIR; CLASS $VAL INSTANCE; FIGURE PART J-BODY;
             RELATION COUT; POSITION ((C) U); SHAPE $VAL BRANCH;
             SUBBRANCH $VAL (H1 NIL H2 NIL H3 NIL); COLOR $VAL BLACK;
             NUMBER $VAL THREE
    J-EYE    AKO $VAL EYE; CLASS $VAL INSTANCE; FIGURE PART J-BODY;
             RELATION IN; POSITION ((*) U); SHAPE $VAL REGION;
             SUBPART $VAL (J-R-EYE J-L-EYE); NUMBER $VAL TWO; CONCEPT $VAL ...

4. Basic strategy for learning

Our learning method does not require a forced arrangement of samples, but starts its learning from seeing an example; however, it needs the existence of frame representations of models which are used in comparison with an unknown object. It then tries to generate a model for the unknown object by referring to an analogical model and using a teacher's indication, and simultaneously it augments the concept trees of objects. At that time, the first key for the detection of analogy is assumed to lie in contiguous relations between subparts and the locations of subparts of objects. The mathematical models of analogy extraction utilize abstract relations between geometrical figures; however, they are too abstract to obtain the same result as we shall reach. When we are told that an unknown object is similar to a certain object from various points of view, we usually expect that many substructures having similar features will be found in the same locations as in the referred object. Of course, there are many examples where a resemblance in location is not useful but rather prevents the program from achieving a correct detection of analogy. In that case, the teacher should explicitly tell the program to ignore that method and to use other methods such as similarity in relations or shapes or colors of objects. As we usually employ this variety of ways to detect analogy and record these experiences in our memory, we can easily decide what method should be used to compare things. We cannot say that the program has learned something until such mechanisms, recording standards for comparing things into memory, are realized in the program.

5. Algorithm of learning

Let S(CI*) and S(CO*) be the groups of parts whose RELATION is CIN and COUT, respectively. Then S(C), S(C*), S(CIN), S(COUT) are defined as follows:

    S(C)    = S(CI*) ∪ S(CO*)
    S(C*)   = S(CI*) ∩ S(CO*)
    S(CIN)  = S(CI*) − S(C*)
    S(COUT) = S(CO*) − S(C*)
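As a concrete reading of these definitions, the following fragment (a Python sketch for illustration only; the original system was not written in Python, and the part names are invented) computes the four sets from a table of parts and their RELATION values:

    # Partition a model's parts by their RELATION slots, following the
    # S(...) definitions above. Part names and relations are invented.
    def part_sets(parts):
        """parts: dict mapping part name -> set of RELATION values."""
        s_ci = {p for p, rels in parts.items() if "CIN" in rels}
        s_co = {p for p, rels in parts.items() if "COUT" in rels}
        both = s_ci & s_co
        return {"S(C)": s_ci | s_co, "S(C*)": both,
                "S(CIN)": s_ci - both, "S(COUT)": s_co - both}

    print(part_sets({"J-HAIR": {"COUT"}, "J-HAND": {"COUT"},
                     "J-MOUTH": {"CIN"}}))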
Then the strategy for finding a candidate part of the object for a part described by the model is the following. Try the procedure below, setting S to S(COUT), S(CIN), S(C*), S(IN) in this order, and let a region including the parts in S be L. First try (1), by finding out the elements to which case (1) is applicable. Next try (2), and then (3), (4), (5) (see Fig.3), in this order, in the same way as (1).

(1) one-to-one correspondence case: Unless this correspondence is denied by the teacher, it is accepted; delete x from S and M, and delete y from O, where x and y are the parts shown in Fig. 3 and M, O denote the sets of model parts and object parts, respectively. If denied, this pair is recorded in NPL (Not-Pair List), and x is put at the end of S in order to be tested again in (5).

(2) one-to-many or many-to-one case: Unless one reliable correspondence between x and y can be found, postpone the decision on x and put it into PLIST.

(3) many-to-many case: By utilizing the relational constraints on their locations, select a consistent combination of correspondences among them. If some are left unmatched, they must be put at the end of S.

(4) no correspondence: put x at the end of S.

(5) After all the above cases are tested, extend the scope for the search of plausible parts into a region including the current neighbour region. When no extension is allowed and PLIST ∩ M ≠ ∅, then ask the teacher what each element in PLIST ∩ M is.

Fig.3 Correspondence between model and object.

In the above process, when some correspondences recorded in NPL are detected later, it is clear that they are not regarded as candidates. If some parts of the model or the object are left unknown even after these processes, the system must ask the teacher about them.
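The bookkeeping in steps (1)-(5) can be sketched briefly. The following Python fragment is an invented stand-in, not the authors' code: the candidate generator, teacher interface, scope model, and region names are all assumptions made for illustration, echoing the two-house example later in the paper.

    # Sketch of steps (1)-(5): model parts are matched against object
    # regions; teacher-denied pairs go on NPL, ambiguous parts on PLIST.
    def match_parts(parts, regions, candidates, ask_teacher, max_scope=3):
        NPL, PLIST, pairs = [], [], {}
        pending = list(parts)
        for scope in range(1, max_scope + 1):   # step (5): widen the scope
            deferred = []
            for x in pending:
                cands = [y for y in candidates(x, regions, scope)
                         if (x, y) not in NPL]
                if len(cands) == 1:             # case (1): one-to-one
                    y = cands[0]
                    if ask_teacher(x, y):       # accepted unless denied
                        pairs[x] = y
                        regions.discard(y)
                    else:                       # denied: record, retest later
                        NPL.append((x, y))
                        deferred.append(x)
                elif cands:                     # case (2)/(3): postpone
                    PLIST.append(x)
                else:                           # case (4): nothing in scope
                    deferred.append(x)
            if not deferred:
                break
            pending = deferred
        return pairs, NPL, PLIST

    # DOOR1 first pairs with R6, the teacher denies it, and the widened
    # scope (with R6 on NPL) settles on R4.
    SCOPES = {1: {"R6"}, 2: {"R4", "R6"}}
    result = match_parts(["DOOR1"], {"R4", "R6"},
                         lambda x, O, s: sorted(O & SCOPES.get(s, O)),
                         lambda x, y: y == "R4")
    print(result)   # ({'DOOR1': 'R4'}, [('DOOR1', 'R6')], [])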
6. Picture Processing

It is necessary to find regions in the given image which seem to correspond to the model parts. This is done independently of the identification process for parts; that is, for the given image an extraction process of lines and regions is conducted. At first, all line segments are found, and next all regions enclosed by some of these line segments are detected. Then relations such as (C)IN/(C)OUT are computed for these regions. Using the result of this preprocessing of the image, a rough search for the object is tried in a top-down fashion which looks for a group of regions that satisfies the contiguous relations (that is, CIN or COUT). Call this a rough model of the object. For example, the program will seek three regions, because a top-level contiguous relation holds among Q-BODY, Q-L-HAND and Q-R-HAND. If this method fails to find a candidate for the object to be identified, the object detection will be attempted by removing one of the regions contiguous to the main part of the model, say the Q-L-HAND.

7. Dialog

C: What is Ojiro?
T: OJIRO IS OBAQ'S BROTHER.
C: Does Ojiro resemble Obaq?
T: YES. BUT OJIRO HAS JUST ONE HAIR. (a)
C: A color of his body is not white. It is yellow. Is it true? (Note that the color of OJIRO's BODY is yellow.) (b)
T: YES.
C: I have found his body. I have found his hair. I have found his left hand. I have found his right hand. (c)
   I think I could find his mouth, but it is different from his model.
T: WHAT ARE YOU DOING?
C: I have found a red branch to the left of his right hand as a candidate for his mouth. But it is different from his model. Do you need another explanation of the location of his mouth? (d)
T: YES.
C: Do you want an explanation in absolute coordinates?
T: NO.
C: It is to the right of his left hand in the image. Do you want another explanation in absolute coordinates?
T: YES.
C: It is in the center of the middle part of his body in the image.
T: EXPLAIN THE DIFFERENCE BETWEEN THE MODEL AND OBJECT.
C: On what part do you want an explanation?
T: MOUTH.
C: The mouth of the model is different from that of the object in the points of shape, subpart and color. Do you need more explanation?
T: YES.
C: The shape of the model is a pink region, and that of the object is a red branch. The mouth of the model has a lip, but that of the object has no subpart.
T: OJIRO'S MOUTH IS A RED LINE. (e)
C: I have found his mouth. I think I could find his right eye, but it is different from his model.
T: WHAT ARE YOU DOING?
C: I have found a black region in the center of the middle part of his body as a candidate for his right eye. But it is different from his model.
T: EXPLAIN OJIRO'S RIGHT EYE.
C: Which do you mean, the model or the object?
T: MODEL.
C: It is a white region which is above his mouth in the model. Do you need another explanation of his mouth?
T: NO.
T: DOES IT HAVE SUBPARTS?
C: Yes, his right eye has a pupil in the model.
T: OJIRO'S RIGHT EYE IS A BLACK REGION.
C: I have found his right eye. I have found his left eye. Finally, I have found Ojiro.
T: EXPLAIN OJIRO.
C: He is a kind of bogle.
T: EXPLAIN OJIRO'S STRUCTURE.
C: Ojiro is a yellow region and his body has a hair, a mouth, eyes and hands.

8. Examples

We would like to explain how the dialog shown above was obtained. By (a) the program copies the OBAQ model and constructs a frame tree consisting of a BOGLE frame, with the OBAQ frame stored as an instance of the BOGLE frame. But it records in its STM that there is just one hair, as the teacher said so. In this case, a serious problem would occur if C did not know what a hair is; however, C can recognize what a hair means, as it is stored as a part of OBAQ (at present, a verbal definition of a new object is not considered).

Next, C looks for a candidate region for OJIRO using the copied model. As mentioned previously, it tries to find the rough model in the second frame of Fig.1. In this frame, OBAQ, TABLE, APPLE, CLOCK and OJIRO are drawn, but as the first three objects have been found in the first frame, C tries to find them before the detection of OJIRO. Then there is a possibility that C will be looking at both CLOCK and OJIRO as candidates for OJIRO. In this case, CLOCK clearly cannot be matched to the rough model, so C succeeds in the detection of plausible regions for OJIRO. But regrettably, the color of the region (yellow) which seems to be OJIRO's body (J-BODY) differs from that of the model (white), so C cannot believe its tentative conclusion. This causes the complaint shown at (b), and on receiving T's agreement, C can believe in its correctness and T can also see that C is in a right state. (Here there is another problem about how a contradiction should be resolved when T's belief does not agree with that of C.) Consequently, C changes the value of COLOR in J-BODY to YELLOW.

Next, C tries a verification of J-HAIR, which is the first element of S(COUT), where S(COUT) = (J-HAIR, J-HAND). As C can be aware of the fact that J-HAIR is a hair by its AKO slot, and that there is a note on the hair in STM, it knows that OJIRO's hair cannot be recognized only by referring to the copied model. Since just one alteration, in the number of hairs, is recorded there, C takes their location to be the same as the model specification. As J-HAIR has a one-to-one correspondence with J1 (see Fig. 4), the system believes this one as far as the teacher does not deny it; J1 can be found in the ((C)U) part of J-BODY. It ends the verification of J-HAIR by storing (J1 NIL) into the SUBBRANCH slot.
In a similar way, C begins to identify J-HAND; however, C is aware that it should look for J-R-HAND and J-L-HAND, as there is a CONCEPT slot in J-HAND, which signifies that this frame is used not to represent graphical relations but to represent conceptual relations between frames. So C succeeds in their identification because of a perfect match in their locations, colors and substructures. The result of these steps is reported at (c).

Now, there being no parts in the model which belong to S(COUT), the elements in S(CIN) and S(C*) must be checked, but there are none. Consequently, the identification process proceeds to S(IN), and C starts a verification of J-MOUTH, where S(IN) = (J-EYE, J-MOUTH). But as there is no possibility for case (1), J-MOUTH and P4, P5, J2 have one-to-many correspondences. At this step it is impossible for the system to decide which one has the best correspondence with J-MOUTH, so we get PLIST = (J-MOUTH). For J-EYE, which implies J-R-EYE and J-L-EYE, there is no candidate in ((L)U), ((R)U) of the object. Consequently the scope of the search must be extended to its neighbour region. This leads the search process to the step of finding J-R-EYE in the scope shown in Fig. 5. As the relational constraint between J-R-EYE and J-MOUTH does not contradict that between P4 and J2, J-R-EYE <-> P4 and J-MOUTH <-> J2 are obtained. Then other properties are tested to verify this decision. But regrettably, discrepancies are found for both his mouth and eyes. The candidate for his mouth is a line segment, whereas the model says that it is a region and that it has a substructure. Similarly, the candidate for his right eye is a black region, but its model description says that it is a white region with a substructure. C complains about these disagreements in the order of their discovery. Therefore it first complains about his mouth, as shown at (d). Given the teacher's instruction on the shape of the mouth, C is convinced of its decision, adds a new slot SUBBRANCH in place of SUBPART, and records (J2 NIL) there, because it has found that his mouth is not a region but a line segment. Here, instead of the instruction (e), the teacher can say that C should believe the given image correct. In that case, C supposes its decision to be right and does the same thing as above. The difference between these two cases is that the latter carries a high risk concerning the correctness of its conclusion.

Next, C complains about the discrepancies his eyes have. When this is resolved by conversation, it is clear that J-L-EYE corresponds to P5. And nothing is stated about his left eye after the instruction on his right eye has been given, because they have the same properties concerning both their models and object parts. In a case where one of them is not the same, a question is asked by C about that difference.

Let us consider the reverse case: learning OBAQ from the model of OJIRO. At first R1, R2 and R3 will become candidates for Q-BODY, because inner regions are not considered as candidates. But it is easily seen that R2 is the best candidate for Q-BODY. Consequently Q-R-HAND <-> R1 and Q-L-HAND <-> R3 will follow (<-> denotes a correspondence). Next, K1, K2 and K3 will have candidacy for Q-HAIR, but the system cannot decide which one is best, and PLIST is set to (Q-HAIR). This problem is not solved by extending the scope, so it must ask the teacher what they are. With this question it knows that OBAQ has three hairs.
Now parts corresponding to Q-R-EYE, Q-L-EYE and Q-MOUTH must be found in the object; however, it is quite difficult to do that by the given description of OJIRO alone, for all of them are located in ((C)C) of J-BODY, and only one part, R8, is located in the corresponding ((C)C) region of R2. If R8 is supposed to be Q-R(L)-EYE, their color and substructure contradict each other. And it is also difficult to regard R8 as Q-MOUTH, for Q-MOUTH is a branch but R8 is a region. As a result of this uncertainty, the scope must be extended. Then the result shown in Fig. 6 is obtained, and Q-R-EYE <-> R4, Q-MOUTH <-> R8 would follow if the relational constraints between them were known. Regrettably, these are not computed from the descriptions recorded in the frame representation, because the precise locational relation between Q-MOUTH and Q-R-EYE cannot be obtained. But by tracing the pointer stored in the IMAGE slot of the copied model of OBAQ, it is easy to get the locational relation between Q-R-EYE and Q-MOUTH and to compute Q-R-EYE <-> R4, Q-MOUTH <-> R8. However, as their colors and substructures do not coincide, they must be asked about. At last a correspondence between Q-L-EYE and R6 is easily obtained.

Fig.4 Candidate region of OJIRO.
Fig.5 A scope for finding J-R-EYE, J-MOUTH.
Fig.6 A scope for finding Q-R-EYE, Q-MOUTH.

Now we would like to give another example to show the current capability of this system. The example is given in Fig.7. Suppose we are given a model for (A) and we must teach the system to model (B). Through the same process shown above, ROOF1 <-> R2 and WALL1 <-> R1 are obtained in this order. Next R6 is hypothesized to correspond to DOOR1, due to its CIN relation and its location. As this is denied by the teacher (it is clear that it is not a door), NPL is set to ((DOOR1 R6)). Then the scope for DOOR1 is extended, and this time R4 and R6 become candidates for it. By referring to NPL, a one-to-one correspondence between DOOR1 and R4 is obtained; however, their locations are different from each other. This discrepancy must be resolved by instructions. As the consequence of this, it can know that R4 is a door, and, by comparing the locations of R4 and DOOR1, that there is a possibility the locational constraint on doors will not necessarily succeed. This is recorded in its memory and should be used later, after the system has experienced more examples from this class of objects. The same thing is done for windows, because the positions of R5 and R6 do not coincide with that of WINDOW1 or R3 (R3 is found in the region predicted by WINDOW1, but R5 and R6 cannot be recognized until they are told).

Fig.7 Two houses ((A) and (B)).

If the teacher explicitly says that the locational constraint will not be valid for identifying doors or windows, this fact must be recorded into their frame descriptions. But this has not been implemented in the present system. As easily understood from the above example, the current representation has many weak points. One of them is that it cannot discriminate the cases shown in Fig. 8. In this figure, P is contiguous to the side, while Q is contiguous to the bottom, although both locations are in ((L)D) of some part. If the system could discriminate them, it could easily infer that R4 corresponds to DOOR1 without regarding R6 as DOOR1.

Fig.8 P and Q.

9. Use of Generic Frames

As mentioned in Section 8, the OBAQ frame causes a BOGLE frame to be generated as a generic one, and the OJIRO frame is obtained through the learning process.
Naturally this generic frame should be able to represent all of its instances, including the OJIRO frame, and properties common to them should be stored in suitable slots of a parent frame. At present our program just makes frame trees in which the OJIRO and OBAQ frames are children of BOGLE. A reason for this is that there is a danger of global rearrangements of frame trees. In the first example, we at first believe the OBAQ frame to be an instance frame, but it may turn out that it is not an instance when other examples of OBAQ not matched to his frame appear in an image, because there are many variations in his shape: he can wink or move his eyes or open his mouth. After the program has experienced these examples, it should make a general concept of OBAQ and arrange the frame trees by erasing unnecessary instances of him. It is not sufficient to record such possibilities in the form of AND or OR combinations of descriptors, because arbitrary combinations of variations in the substructures are not necessarily allowed. The rearrangement of frame trees is a difficult problem, considering the possibility that man is apt to fail in giving correct instructions.

Though there are some incomplete points in the construction of frame trees, the program can use a portion of them to identify subparts of the object to be learned. For example, suppose that we would like to teach, by referring to OJIRO, a character Q-KO, who resembles him very much except for her eyes, and her eyes are rather similar to OBAQ's. In the course of her identification, if the OBAQ frame is not stored, the program will complain about her eyes, just as in the learning of OJIRO from OBAQ. However, it can use OBAQ's eyes in the recognition of her eyes by tracing its AKO link and finding the OBAQ frame, after a failure in the matching of her eyes to OJIRO's. Of course, it does not do that without the teacher's permission, but will ask for his approval.

10. Conclusion

A new attempt toward a verbal modeling of objects has been shown in this paper; however, there are many incomplete points concerning the learning method taken in this research. The basic strategies for finding analogies among things have been given to the system as known facts, but they should be obtained by the system itself in the course of learning, which needs more examples than were experimented with in this paper. To accomplish our purposes we must implement many programs, including a language system which understands ellipsis and anaphora and gives us good explanations of the structure of objects and the reasons why such structural descriptions have been obtained. And if a fatal error is detected after the acquisition of some models, it must be corrected by considering the history of the construction of the models. This error-correcting process has a close relation to frame trees.

References

1) N. Abe, I. Soga and S. Tsuji: A Plot Understanding System on Reference to both Image and Language, 7th IJCAI, p.77 (1981)
2) N. Abe and S. Tsuji: A Learning of Object Structures by Verbalism, COLING-82 (1982)
3) P. H. Winston: Learning Structural Descriptions from Examples, Ph.D. Th., MIT (1975)
4) P. H. Winston: Learning by Creating and Justifying Transfer Frames, Artif. Intell., 10, 2, p.147 (1978)
5) P. H. Winston: Learning and Reasoning by Analogy, CACM, 23, 12, p.689 (1980)
6) T. G. Dietterich and R. S. Michalski: Inductive Learning of Structural Descriptions, Artificial Intelligence, 16, p.257 (1981)
7) S. A. Vere: Inductive Learning of Relational Productions, in Pattern-Directed Inference Systems (Academic Press, 1978)
AN OVERVIEW OF ΦNIX

David Barstow, Roger Duffey, Stephen Smoliar, Stanley Vestal
Schlumberger-Doll Research
Old Quarry Road
Ridgefield, Connecticut 06877

ABSTRACT

ΦNIX is an automatic programming system being developed for use by petroleum scientists who may not be knowledgeable about computers. As a first step, a system has been implemented to assist in the trial-and-error process of developing new models and techniques for quantitative interpretation of well logs. The user interface exploits graphical techniques to enable petroleum scientists to describe their models in the natural concepts of the domain. The resulting specification can be implemented in any of several different target languages. The system is in active use by petroleum scientists, who find that it has significantly reduced the time to get feedback on hypothesized models.

I INTRODUCTION

ΦNIX is an automatic programming system whose development is guided by three basic principles:

o ΦNIX users need not be knowledgeable about computers or computer science.
o ΦNIX must be able to produce prototype software quickly enough for the user to cycle many times through the trial-and-error loop of initial specification.
o ΦNIX must be able to produce software that is sufficiently general, robust and efficient for routine use.

We do not believe that the software development process is sufficiently well understood for general-purpose systems based on these principles to be built in the near future. In particular, we believe that the goal of generality has led traditional automatic programming research to ignore significant software development issues. Principal among these are:

o software specification by users who are not knowledgeable about computer science
o software complexity due to size rather than algorithm
o software evolution as a fundamental characteristic of software use

With the hope of addressing these issues, we have adopted an experimental approach in which we select particular "real-world" programming tasks and develop special-purpose systems for them, guided by the principles given earlier. In the rest of this paper, we will try to convey our goals and approach by describing our first experimental system from a user's viewpoint, followed by a brief discussion of specific issues we plan to address in future experiments.

II φ0, AN INITIAL EXPERIMENT

For our first experiment, we have chosen the problem of writing prototype software to support the trial-and-error process with which petroleum scientists develop models and techniques for interpreting the petrophysical data measured by logging tools.* Quantitative log interpretation is usually done on the basis of models which are statements of relationships between the measured information and the desired information.

* Oil well logs are made by lowering instruments (called tools) into the borehole and recording the measurements made by the tools as they are raised to the surface. The resulting logs are sequences of values indexed by depth. Logging tools measure a variety of basic petrophysical properties (e.g., the resistivity of the rock surrounding the borehole). Petroleum engineers, geophysicists and geologists are typically interested in other kinds of information which cannot (now) be measured directly (e.g., water saturation, the fraction of the rock's pore space occupied by water as opposed to hydrocarbons). Log interpretation is the process of computing the desired information from the measured information.
Many such models can be described mathematically as equations in which the terms denote the log readings or the desired numeric quantities. The models themselves are the result of a variety of concrete experiments and theoretical investigations.

In developing our initial system, called φ0, we have had two primary concerns:

o Since the user (a log analyst) may be neither experienced in nor comfortable with traditional computer interfaces, it was important for φ0 to "speak the user's language". Most importantly, we did not want to require the user to learn some kind of command language.
o Since the user's data exist on a variety of computing environments with different computational abilities, it was important for φ0 to be able to implement the user's model in a variety of target languages. However, the user should be required, at most, to simply indicate the target environment, with φ0 handling all of the details.

Perhaps the best way to describe φ0 is from the perspective of the user, a log analyst not necessarily knowledgeable about computers or computer science. In effect, φ0 provides him/her with a "model editor": facilities for developing and modifying the different aspects of an interpretation model. Each of these facilities includes high-level operations, expressed in the natural concepts of the domain.

The attached picture shows the φ0 user interface** during the process of developing a particular interpretation model (an unrealistically simple model has been chosen for the sake of this presentation). Different windows are used to show different aspects of the model. (Screen detail: tool response models BULKDENSITY, NEUTRON, SONIC; equation systems CORI/WATER-SATURATION.) The window entitled "Geological Model: CORI" contains a diagrammatic description of the universe which has been assumed for this particular model: the rock around the well consists of two types of solid (calcite and dolomite) and two types of fluid (water and oil). A geological model corresponds to certain equations which constitute part of the overall interpretation model. In this case, the equations are:

    1.0 = V_calcite + V_dolomite + V_water + V_oil

where V_x represents the volumetric fraction of material x. By stating this assumption, the user has laid the foundations for the other aspects of an interpretation model. In order to allow the user to develop alternative geological models, φ0 includes a graphics-oriented tree editor. For example, to indicate that there are two types of hydrocarbons present, the user could point (using the mouse) to the OIL node and replace it (through selections from a command menu) with a HYDROCARBON node whose children are OIL and GAS nodes.

** φ0 currently runs on a Xerox 1100 Scientific Information Processor which includes a 1024 by 808 black and white pixel screen. φ0 is written in Interlisp-D, a Lisp dialect with a variety of primitives for dealing with bitmaps, windows, menus, and a "mouse" pointing device.

The window entitled "Tool Response Model: BULKDENSITY" shows the equation which relates the measured tool response (ρ_b) to the responses the tool would make in the presence of 100% concentrations of the materials which make up the geological model (ρ_x denotes the response of the tool to material x). A table of responses to particular minerals is also shown. φ0 includes an editor for tool response models; for example, one editing activity is to add or change the response of a tool in a particular mineral.
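The way a geological model induces equations can be sketched briefly. The following Python fragment is an illustration only, not φ0's Interlisp implementation; the tree encoding, the symbol names, and the linear form of the tool response are assumptions made for the example:

    # A geological model as a tree of materials; its leaves induce the
    # volumetric balance, and a tool response model adds one linear
    # equation per tool: reading = sum of pure-material responses
    # weighted by the volumetric fractions.
    model = ("ROCK", [("SOLID", [("CALCITE", []), ("DOLOMITE", [])]),
                      ("FLUID", [("WATER", []), ("OIL", [])])])

    def leaves(node):
        name, children = node
        return [name] if not children else [m for c in children
                                            for m in leaves(c)]

    def volume_equation(node):
        """1.0 = sum of volumetric fractions over the leaf materials."""
        return "1.0 = " + " + ".join(f"V_{m.lower()}" for m in leaves(node))

    def response_equation(tool, node):
        terms = " + ".join(f"rho_{tool}_{m.lower()}*V_{m.lower()}"
                           for m in leaves(node))
        return f"rho_{tool} = {terms}"

    print(volume_equation(model))
    print(response_equation("b", model))   # e.g. the BULKDENSITY response

Editing the tree (replacing the OIL leaf with a HYDROCARBON subtree, as the tree editor would) simply regenerates both equations with the new set of leaves, which is why the model editor can stay entirely in domain terms.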
Geological models and tool response models are two of the most common ways that log analysts use to describe the equations of an interpretation model. However, not all equations are appropriately described using such techniques. For such cases, φ0 provides an equation editor. The window entitled "Equation System: CORI/WATER-SATURATION" illustrates the editor being used to describe a water saturation equation for use in conjunction with the geological model given earlier. The editing commands are generally oriented to particular subexpressions of the equations. For example, two subexpressions may be swapped by pointing to each with the mouse after selecting the "swap" command from the command window.

The window entitled "Computational Model: CORI" shows how the user may combine the different aspects of an interpretation model to give a complete model. In this case, the user has selected a single geological model, three tool response models, and the water saturation equations. By putting these pieces together, the user has, in effect, specified a model consisting of eight equations in eight unknowns. The typical use of such a model is to compute, for each depth in a well, the relative concentrations of the minerals of the geological model, given the tool readings at that depth. This use is described in the rest of the computational model window: the input and output logs are the inputs and outputs for each depth; parameters are values the user has chosen for run-time specification; constants are values which φ0 has stored in its knowledge base about log interpretation; locals are terms which appear in the equations but which the user has chosen not to see as outputs.

At this point, the user has finished his/her part of the job; the process of implementing the model as a program in a particular target language is "merely" an exercise in mathematical manipulation and programming and is left to φ0 to complete.***

*** Of course, the process of enabling φ0 to do this exercise was a major effort, involving the codification of several kinds of programming knowledge, ranging from techniques for algebraic manipulation to the syntax of three significantly different target languages (LISP, FORTRAN, PROSE). The details of how φ0 works internally are available elsewhere [1].
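In essence, the target program φ0 generates is a per-depth equation solver. As a hedged illustration of what such generated code amounts to (a hand-written Python/NumPy stand-in, not φ0 output; the response values and readings below are invented placeholders, and a real model need not be purely linear), consider solving the material balance plus three tool responses at one depth:

    # One depth step of a CORI-like model: four unknown volumetric
    # fractions, one balance equation, three linear tool responses.
    import numpy as np

    materials = ["calcite", "dolomite", "water", "oil"]
    responses = {                 # per-tool response in each pure material
        "bulkdensity": [2.71, 2.87, 1.00, 0.85],
        "neutron":     [0.00, 0.02, 1.00, 1.00],
        "sonic":       [47.5, 43.5, 189.0, 230.0],
    }

    def solve_depth(readings):
        """readings: dict tool -> measured value at this depth."""
        rows, rhs = [[1.0] * len(materials)], [1.0]   # volume balance
        for tool, row in responses.items():
            rows.append(row)
            rhs.append(readings[tool])
        sol = np.linalg.solve(np.array(rows), np.array(rhs))
        return dict(zip(materials, sol))

    print(solve_depth({"bulkdensity": 2.3, "neutron": 0.25, "sonic": 80.0}))

Iterating such a step over every depth in a log, in whichever target language the user selected, is exactly the kind of routine programming the system is meant to take off the analyst's hands.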
Programming knowledge: @NIX will clearly require access to several types of programming knowledge in addition to the algebraic manipulation knowledge contained in 40. Chief among these are knowledge about certain aspects of numerical analysis and knowledge about the efficiency trade-offs among different programming techniques. Perhaps the most interesting question concerns the degree to which knowledge about log interpretation itself will be required during synthesis: is the information con- tained in the specification sufficient for synthesis, or must the synthesizer understand the domain to a greater degree in order to decide among implementa- tion alternatives? System development repository: During the synthesis process, @NIX will be making a large number of decisions. Both to validate the target code and to facilitate evolution of the code to accomodate changes in the specification, it will be necessary to keep detailed records about the decisions and the motivation for them. We are hopeful that addressing these issues will help automatic programming research to have a significant impact of the real world on software engineering. m PI REFERENCES Barstow, D., Duffey, R., Smoliar, S., Vestal, S. “An automatic programming system to support an experimental science”, Sixth International Confer- ence on Software Engineering, Tokyo, Japan, Sept. 1982. Smoliar, S. “Specifying logic and control: an object-oriented approach.” (in preparation).
ARBY: Diagnosis with Shallow Causal Models

Drew McDermott
Yale University, Department of Computer Science

Ruven Brooks
ITT Integrated Systems Center

ABSTRACT

Arby is a software system, or higher-order language, for writing expert systems to do diagnosis in electronic systems. As such, it is similar to EMYCIN (van Melle 1980) in application, but quite different in design. It is rule-based to an extent, but the rules are written in predicate calculus. It resembles Caduceus (Pople 1977) in its mechanisms for refining and combining hypotheses.

1. Overview

Arby is a software system, or higher-order language, for writing expert systems to do diagnosis. As such, it is similar to EMYCIN (van Melle 1980) in application, but quite different in design. It is rule-based to an extent, but the rules are written in predicate calculus. It resembles Caduceus (Pople 1977) in its mechanisms for refining and combining hypotheses. Unlike both of those systems, however, Arby is designed for finding faulty modules in large electronic systems (Davis 1981).

The system we have been studying so far is the Microwave Stimulus Interface (designed by the General Dynamics Electronics Division). This system is itself used to test avionics equipment. It provides microwave signals modulated in certain standard ways. A (simplified) block diagram appears in Figure 1.

Figure 1. MSI block diagram

The MSI consists of two main stages. The first amplitude-modulates the output of an RF synthesizer using feedback through a detector. The second introduces pulse modulation. Ordinarily, the diagnostician is presented with a faulty MSI which has already been run through some automatic diagnosis software. So he is presented with findings and evidence including the output of the low-level diagnostics. His job is to find which module (detector, pulse modulator, any of several switches, etc.) is faulty. He can set switches to any position, probe with an oscilloscope, break the connection at certain points, and stimulate various inputs with standard signals. However, these actions vary in time expended and information gained.

This domain is distinguished by the following two features:

o Reasoning with shallow models. The problem of diagnosing faults in electronic circuits is in general quite difficult (deKleer 1976, Brown 1975). This is because electronic circuits are hard to break into separable modules. Explaining why such a circuit is malfunctioning requires understanding a causal model of it. However, above the level of individual circuits, the models get simpler. They usually consist of a flow of something (like a signal) through nodes and devices. The most sophisticated system to understand is a negative-feedback loop. On the other hand, things are more complex than in medical diagnosis, where multilevel causal models are quite rare. (A sketch of this flow view follows the list.)

o Information acquisition strategies. In diagnosis, the full set of symptoms is not usually available at the time diagnostic reasoning is done, but must be acquired during the reasoning process; thus, part of the reasoning process must be to determine which tests are to be run or which observations are to be made. The mechanism for doing this must take into account the cost - whether financial, time or risk to the patient - of acquiring the additional information.
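The shallow-model idea can be made concrete: above the circuit level, the device is just a path of stages through which a signal flows, so a good upstream reading plus a bad downstream reading brackets the fault. The following Python fragment is an invented illustration, not Arby code; the stage names and the probe interface are stand-ins:

    # Shallow-model localization: the device is a linear signal path, and
    # probing a node tells us whether the signal looks correct there. A
    # fault lies between the last good node and the first bad one.
    STAGES = ["rf_synthesizer", "amplitude_modulator", "detector",
              "pulse_modulator", "output"]

    def localize(probe):
        """probe(node) -> True if the signal at node looks correct."""
        last_good = None
        for stage in STAGES:
            if probe(stage):
                last_good = stage
            else:
                return (last_good, stage)   # fault between these two points
        return None                          # no fault observed

    # Example: pretend everything after the detector is bad.
    print(localize(lambda s: STAGES.index(s) < 3))

Since each probe has a different cost, a real strategy would not simply sweep left to right but pick the probe with the best information gained per unit cost, which is exactly the second feature above.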
2. How Arby Works

Arby consists of two major modules: an inference system, HYPO, which reasons about symptoms and hypotheses, and a human interface system, IFM, which obtains information from the user. Both modules of Arby are written in Franz Lisp, and use the DUCK general-purpose retriever. DUCK is a descendant of Conniver (Sussman and McDermott 1972), and may be thought of as a more powerful version of Prolog.

Figure 2. Arby block diagram

2.1 The Hypothesis Generator

The hypothesis generator (HYPO) is responsible for creating and refining hypotheses to account for findings. It has two main submodules for doing this: the candidate hypothesis finder, and the evidence sifter. Each of these relies on predicate-calculus deductions on a database of rules.

Findings and hypotheses are (in principle) arbitrary formulas about entities in the domain. A finding is a fact that needs to be explained, and a hypothesis is a fact that would explain it. In order to find hypotheses, HYPO uses the backward-chaining facility of DUCK to find solutions to the goal:

    (accounts-for ?hypo <finding> <given>)

(The third argument will be explained later.) The solutions come back in the form of possible values for ?hypo, plus justifications, rules and facts which together support this explanation. (Doyle 1980)

It is quite common for HYPO to find more than one hypothesis that could account for a given finding. To handle this, it enters a special choice protocol (McDermott 1978) in which it attempts to find evidence for and against each candidate. That is, it attempts to deduce formulas of the form

    (evidence <hypothesis> ?amt)

for each candidate hypothesis. It totals the evidence, and the candidates with totals significantly higher than the others are retained.

This strategy as described has one failing: it assumes that all the evidence has already been gathered. In some domains (e.g., that of MYCIN), this is a reasonable assumption. In the domain of simple electronic reasoning, there are many tests that could be run. It would be silly to run them all before the consultation begins. Instead, the system doesn't run any until it is faced with a choice situation that requires more information to separate the two leading candidates. Then it runs whatever test seems most profitable, that is, the one that will apparently produce the biggest impact on the evidence totals for the leading candidates at the least cost. (This is done with the Interaction Frame Manager, described below.) If no test looks like it is worth running, then HYPO is not lost. In addition to favoring hypotheses with the highest evidence totals, it also favors those that account for as many findings as possible. (Pople 1977)

After producing the composite hypothesis with the best overall evidence, Arby does not quit, but proceeds to refine each element of the hypothesis. This means finding a more detailed explanation of each finding. For instance, the first round might localize the failure in module33. The second round might then localize it to a device within that module, by solving the deductive goal:

    (accounts-for ?hypo <finding> (fault-in module33))

The advantage of doing things this way is that only a small number of alternatives need to be considered at each level. When one is rejected, all of its refinements are swept away without ever being considered. Furthermore, this organization nicely reflects the hierarchical structure of many electronic systems.
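The choice protocol can be sketched in a few lines. This is a Python stand-in for the DUCK deductions, not Arby's Franz Lisp code; the rule tables, hypothesis names, evidence amounts, and the margin threshold are all invented for illustration:

    # Generate candidate hypotheses for a finding, total the evidence for
    # each, and keep the leader only if it beats the runner-up clearly;
    # otherwise signal that a test (via IFM) is needed.
    ACCOUNTS_FOR = {   # finding -> hypotheses that would explain it
        "low_power_low_freq": ["fault_in_lf_loop"],
        "bad_loop_node": ["fault_in_detector", "fault_in_synthesizer"],
    }
    EVIDENCE = {       # hypothesis -> evidence amounts deduced so far
        "fault_in_detector": [3, 2],
        "fault_in_synthesizer": [1],
    }

    def choose(finding, margin=2):
        scored = sorted(((sum(EVIDENCE.get(h, [])), h)
                         for h in ACCOUNTS_FOR.get(finding, [])),
                        reverse=True)
        if len(scored) > 1 and scored[0][0] - scored[1][0] < margin:
            return "need-more-evidence"   # run the most profitable test
        return [h for total, h in scored[:1]]

    print(choose("bad_loop_node"))   # detector leads by 5 - 1 >= margin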
2.2 Interaction Frames

The basic structure which controls the user interaction consists of an interaction manager, IM, and a set of interaction frames, IFs. An IF is a discrete unit of interaction with the user which, when it terminates, results in the addition of assertions to the database. A very simple IF might consist of asking the user a multiple-choice question and, depending on the answer, placing just a single assertion into the database. A more complex IF might instruct the user step by step to run some complicated equipment test and then place several different assertions into the database to record the results of the test. An important property of interaction frames is that the assertions they may add are unconstrained; this is in contrast to the MYCIN structure, in which an interaction may only change the values of variables.

Interaction frames are invoked because the HYPO component needs to retrieve information that is not yet in the database but which could be obtained by asking the user. This is implemented by asserting rules of the form:

    (<- <form to be deduced>
        (QIF <interaction frame> <arguments>))

If the deduction cannot be made any other way, this causes the interaction frame to be placed on a list of candidate frames to be invoked. Using the cost criterion described earlier, this list is ordered to identify the most "profitable" frame to run.

In general, however, just asking for the information in the order determined by the inference system will not result in an acceptable order of questions from the user's standpoint: the questions might have logical prerequisites which are not yet satisfied, or the questions might not occur in the order to which the user is accustomed. To ensure the satisfaction of these constraints, the Interaction Frame Manager requires that an IF be explicitly enabled before it can be invoked. In the database, this is indicated by assertions of the form:

    (want <IF> <arguments>)

To ensure, for example, that the status of switch 17 is asked about only after it has been determined that the level of amplitude modulation is high, a rule of the following form could be written:

    (<- (want IF23 17)
        (modulation-level high))

There are also assertions for indicating that an IF has already been run and for indicating that an IF may be run whenever needed. This combination gives the Interaction Manager considerable power in reasoning about and optimizing the structure of interactions with the users.
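The IFM selection step, then, combines two filters: an IF must be both a candidate (queued by a failed deduction) and enabled (by a want assertion), and among those the best payoff-per-cost frame runs first. A minimal Python sketch, with invented frame names, costs, and payoff estimates standing in for the database assertions:

    # Among the candidate IFs, run only those enabled by a (want ...)
    # assertion, ordered by expected evidence impact per unit cost.
    candidate_ifs = ["IF23", "IF7", "IF41"]
    enabled = {"IF23", "IF41"}        # frames with a (want IF ...) assertion
    cost = {"IF23": 2.0, "IF7": 1.0, "IF41": 5.0}
    payoff = {"IF23": 6.0, "IF7": 9.0, "IF41": 5.0}

    def next_frame():
        runnable = [f for f in candidate_ifs if f in enabled]
        if not runnable:
            return None
        return max(runnable, key=lambda f: payoff[f] / cost[f])

    print(next_frame())   # IF23: enabled, and best payoff/cost ratio

Note that IF7, despite the highest raw payoff, is never run here because no want assertion has enabled it; this is how the prerequisite ordering overrides pure profitability.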
3. "Accounts-for" rules are used to propose candidate explanations. In this case, the problem could be in the low-frequency modulation loop. This is the only candidate, so it is accepted.

4. Next Arby attempts to refine the hypothesis. "Accounts-for" rules suggest any of several components in the low-frequency modulation loop. "Evidence" rules are sought to propose evidence one way or another. There is a rule of the form: "If, after cutting the loop before the synthesizer, measurement of a node shows it to be saturated the wrong way, then there is good evidence in favor of localizing the problem in some device before the node; else there is good evidence in favor of localizing the problem in some device after the node."

5. This rule has no immediate effect, because it refers to a real-world action that may not be worth performing. This action is queued, and, assuming no decision can be reached without it, it is handed to IFM. IFM then uses IFs to instruct the user how to cut the loop and what to measure.

References

[1] Johan de Kleer 1976. Local methods for localizing faults in electronic circuits. MIT A.I. Lab TR-394.

[2] Jon Doyle 1980. Truth maintenance. Artificial Intelligence 12, pp. 231-272.

[3] Allen Brown 1975. Analytical knowledge, causal reasoning and the localization of failures. MIT A.I. Lab TR-394.

[4] Randall Davis 1981. Expert systems: where are we and where do we go from here? Invited talk at IJCAI 7.

[5] Drew McDermott 1978. Planning and acting. Cognitive Science 2, no. 2, p. 71.

[6] H. Pople 1977. The formation of composite hypotheses in diagnostic problem solving. IJCAI 5.

[7] Gerald J. Sussman and Drew McDermott 1972. From PLANNER to Conniver -- a genetic approach. Proc. FJCC 41, p. 1171.

[8] Van Melle, W. 1980. A domain-independent system that aids in the construction of knowledge-based consultation programs. Heuristic Programming Project, Department of Computer Science, Stanford University, HPP-80-11.
GRAPHICAL ANIMATION FROM KNOWLEDGE

Daniel Neiman
Dept. of Electrical Engineering and Computer Science
The University of Connecticut
Storrs, CT 06268

ABSTRACT

A system is described which produces explanatory animation sequences for a small CAD system. The animation is data-driven from a scriptal knowledge structure describing the user-system interaction for a given design operation. The use of a unified data representation scheme results in the potential to generate animation in coordination with natural language output from an existing knowledge-based system.

I INTRODUCTION

This paper describes GAK, a system for producing Graphical Animation from Knowledge as a part of the explanations generated by the CADHELP system. CADHELP is a user-friendly CAD system which provides basic logic design functions and produces natural language descriptions of these features aimed at the naive user [1]. The natural language explanations are generated from the CADHELP database, which describes in detail the user-system interaction for each system feature. The GAK system supplements these explanations with demonstrative animation sequences driven from the same system-global knowledge structure.

For example, when the CADHELP system explains the system feature DRAG, which allows the user to move a device across the design, it generates the sentence "Move the cursor to the device you want to drag." The initial state of the animation shows the graphics display screen with a logic design, the graphics tablet, and the hand of the user grasping the stylus. The hand moves the stylus over the graphics tablet. At the same time, the animator shows the cursor moving on the screen. When the cursor encounters the device to be dragged, the system then generates the next step in the DRAG feature.

The above example demonstrates the advantages of a knowledge-driven animation system. The animated demonstration corresponds to the natural language explanation and is generated from the same conceptualization. The complex causal relationship which exists between the location of the stylus on the tablet and the location of the cursor on the graphics screen is made clearly visible to the user.

This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract N00014-79-C-0976.

This paper will outline the rationale for graphical animation from knowledge, the CADHELP system knowledge structures, and the requirements of a knowledge-driven graphics system. This will be followed by a discussion of the implementation of the GAK system.

The issues demonstrated by the successful implementation of the GAK system follow:

1. Knowledge-driven animation is possible and can be implemented in a modular fashion. By defining a limited set of primitive graphical actions, a wide number of animated sequences can be generated.

2. The animation process can be driven from the same knowledge structure used for generating natural language explanations (a sketch of this coordination follows this list). This allows an additional output modality for knowledge-based systems which can be coordinated with natural language utterances. From a theoretical standpoint, the use of the knowledge structure to generate multi-modal output demonstrates the generality of the knowledge representation techniques employed.

3. The animation produced is very flexible. As the system changes, only the description of the system needs to be modified. As new features are added to the system, corresponding animation can be generated with little or no additional work.
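As a minimal sketch of point 2 (in Python, not CADHELP's Franz Lisp; the concept format, the generate_english renderer, and the table of animation "experts" are assumptions made for illustration):

# Multi-modal generation driven from one conceptualization.
def ptrans_expert(concept, display):
    display.move(concept["obj"], concept["to"])   # animate the movement

ANIMATION_EXPERTS = {"ptrans": ptrans_expert}     # one expert per CD action

def explain(concept, display, generate_english):
    # Both modalities are driven from the same knowledge structure:
    print(generate_english(concept))              # e.g. "Move the cursor ..."
    expert = ANIMATION_EXPERTS.get(concept["action"])
    if expert is not None:
        expert(concept, display)                  # coordinated animation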
II REPRESENTATION OF KNOWLEDGE

The natural language explanations of CADHELP are generated from feature scripts [2] which describe in detail the interactions between the user and the CAD system. These feature scripts consist of Conceptual Dependency (CD) actions and states [3] causally connected by the primitives of the Commonsense Algorithm [4]. These scripts are created by expert users in the domain of the CAD system after careful examination of the physical processes involved in the performance of the logic design features.

The actions and states known to the system are the standard CD primitives applicable to the CAD domain (PTRANS, PROPEL, MTRANS, ACONFIG, PCONFIG, etc.) plus a small set of system-specific actions (SDRAW, SMAKE-MAP, SMOVE, SPRESS). It is the actions of the feature script which are explicitly animated, as it is only actions which can cause visible changes to the state of the world. The animation process is data driven, with a corresponding animation "expert" for every known action. The states contained in the feature script serve two purposes: they update the world model of the animation system, and they serve as predicates for conditional traversals of causal chains.

The causal links between the concepts in the feature script are those of the Commonsense Algorithm. The simplest links (one-shot-enables, causal-state-coupling, etc.) are represented as binary assertions of the causality between adjacent actions and states, i.e.

(causal-link concept concept)

Branches in the flow of execution are represented by a decision construct known as a "turning point" [5]. This construct contains alternate causal paths associated with decision criteria represented as states. If a state is true according to the current state of the world, then the corresponding causal chain will be traversed.

(trnpt <state-condition ((causal chain))>
       ...
       <default-condition ((default causal-chain))>)

Repeat-until-threshold (RUT) links are represented in a similar way. The RUT construct contains a causal chain to be traversed repeatedly until a termination state becomes true.

(rut termination-state ((causal-chain)))

Each of these causals is considered a link. Causal chains are constructed as lists of links. A typical causal chain will have the form

(linear (ln1 ln2 .... lni))

where the prefix linear indicates that parallelism is not present in the script. The GAK system cannot animate events occurring in parallel.

The previous discussion has focused on the structure of the scriptal knowledge required to drive the animation process. GAK also contains an internal knowledge structure describing the primitive entities of the CAD microworld. These entities are the hand of the user, the graphics display, the stylus and tablet used for graphical input, and the various devices and attention-focusing entities which can be drawn on the screen. Each of these entities is described by a CD-like frame which contains the physical and graphical attributes of the entity. The representation below describes the version of the user's hand which is used to grasp pen-like objects. Comments are indicated by the tilde ("~").

(dv hcon2
  (hand functgrasp            ~this flavor of hand grasps
    represents *hand          ~real entity is a hand
    grspobj *stylus           ~stylus is exemplar
    grasps(nil)
    intens(15)                ~graphic attrs.
    posx(0.0) posy(300.0) scale(4.0)
    orient(trns rot(0.0) rotx(160.0) roty(35.0))
    segno(nil)
    fncprts(h2gpnt2)
    hasparts(hbody2 thmb2 ffngr2 ifngr2 rfngr2 lfngr2)))

The functional part is a particular point of interest on the given entity; in this case, it is the part of the hand which supports the stylus.

(dv h2gpnt2
  (grspnt partof hcon2
    functionality(support obj(handle prox(tip)) functgrasp)
    sx(170.0) sy(-30.0)))

~The body of the hand.
(dv hbody2
  (hbody partof hcon2
    config h2lns1
    sx(0.0) sy(0.0) dx(-1.0) dy(-57.0)))

~The graphics instructions used to draw the body of the hand are stored in the KS.
(dv h2lns1
  (config shape(amorph)       ~no particular shape
    lines((L 78.0 2.0)(L 20.0 3.0)(L 47.0 24.0)
          (L 2.0 0.0)(L 2.0 -1.0)(M -16.0 -65.0)
          (L -2.0 -1.0)(L -16.0 -4.0)(L -20.0 2.0)
          (L -19.0 2.0)(L -77.0 -19.0))))

~the representation for the rest of the hand is omitted

The ability to draw upon knowledge to make correct inferences is a necessary feature of an intelligent system [6]. GAK is able to correctly infer instances of causality not contained explicitly in the feature script and produce the appropriate animation. The rules necessary for instantiating omitted causal relationships are contained within the routines responsible for animating the system actions. For example, if the user is grasping the stylus and moves his hand, then the stylus must move with the hand. In the PTRANS routine, which is responsible for animating movements of objects, there exists (among others) a rule which states that if a physical object is moved, then any entity attached to that object will also move.

The representation and specification of locations is a serious issue in graphical animation. The animation routines require x, y coordinate pairs, a degree of specification not provided by the scriptal concepts. Locations are also represented by CD-type frames and have two types: coordinate pairs and "fuzzy locations". A coordinate-pair representation describes the x, y point on the screen and the entity in whose frame of reference that point lies; e.g., the starting point for a signal path can be represented as:

(dv *start-term
  (coords posx(102.0) posy(804.0)
    rel corresp(nil)
    partof scrncon
    role(start-term)))

A fuzzy location (cf. McDermott [7]) is used to represent areas rather than points. The representation describes a box and a series of fuzzy specifications. Any points which are contained within the box and do not violate the fuzzy specifications are considered acceptable. The representation for a fuzzy area located at the lower right-hand corner of the drawing area is shown below.

(dv *rhlctab
  (fuzzyloc partof tabcon
    maxxbnd(edge deftyp(func) espec(right) obj(darea))
    maxybnd(fraction deftyp(func) fracspec(0.25)
            start(edge deftyp(func) espec(bottom) obj(darea)))
    minxbnd(fraction deftyp(func) fracspec(0.75)
            start(edge deftyp(func) espec(left) obj(darea)))
    minybnd(edge deftyp(func) espec(bottom) obj(darea))
    fuzzspec(nil)))

In addition to the explicit representations of locations, the location "expert" possesses a set of inference rules which are used to disambiguate locational references not clearly specified by the script.
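As a minimal sketch of resolving a fuzzy location to a concrete point, in the spirit of the location "expert" (the Python box encoding and the random sampling policy are assumptions for illustration, not GAK's actual code):

import random

def resolve_fuzzy_location(box, fuzz_specs=()):
    """Pick a random point inside the box that violates no fuzzy spec."""
    while True:
        x = random.uniform(box["minx"], box["maxx"])
        y = random.uniform(box["miny"], box["maxy"])
        if all(spec(x, y) for spec in fuzz_specs):
            return (x, y)

# Lower right-hand quarter of a hypothetical 1000x1000 drawing area,
# roughly in the manner of *rhlctab:
corner = {"minx": 750.0, "maxx": 1000.0, "miny": 0.0, "maxy": 250.0}
print(resolve_fuzzy_location(corner))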
III THE ANIMATION PROCESS

The GAK implementation described is written in Franz Lisp and runs on a VAX 11/780. The graphics device is a Digital Equipment VT-11 vector display. The speed of the animation generation is less than real time, due to the distributed nature of the system and the use of interpretive code.

The process of producing animation is straightforward, given the knowledge structures previously described. The GAK system is a service expert to the CADHELP explainer. To initiate an animation sequence, the explainer passes the appropriate script to the animator and then requests that the animation be performed. When such a request is received, GAK initializes the display to an initial state and traverses the causal chain. The animation is data-driven, i.e., as animatable concepts are encountered, the corresponding animation functions are invoked. If the explainer has generated a natural language explanation of a given concept, then text is displayed as the concept is animated.

A demonstration of the CAD system consists of a sequential performance of actions on the part of the user followed by the system's responses. The animation which simulates this demonstration takes place as a result of encountering these actions in the course of traversing the causal chain of the feature script. As each action is encountered, a graphics routine specific to that action is invoked to produce the desired graphical sequence. When, for example, the system encounters the concept

(ptrans actor *user obj *stylus to *rhlctab)

"move the stylus to the right-hand lower corner of the drawing area," the concept is passed to the PTRANS "expert". This routine first determines the current location of the stylus. Because it is typically the business end, or tip, of the stylus which must be moved to a given location, the location of the stylus defaults to the location of the tip of the stylus. The destination given in the "to" slot of the PTRANS is the lower right-hand corner of the drawing area on the tablet. This concept is represented by the fuzzy location shown previously. The location mechanism takes as input the fuzzy conceptualization and returns a random point within the area which it defines. It is to this point on the screen that the tip of the stylus will be moved. The inference rules associated with the PTRANS concept are then checked. As it is the user who is the actor of the PTRANS, the inference is made that it must be the hand of the user that causes the stylus to move. Graphics routines are then called into play which actually cause the images of the hand and the stylus to move across the screen.
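A minimal sketch of the data-driven traversal that invokes such experts follows. The link formats mirror the paper's (linear ...), (trnpt ...) and (rut ...) constructs, but the Python encoding, the world model, and the expert table are assumptions made for illustration:

def traverse(chain, world, experts):
    for link in chain:
        kind = link[0]
        if kind == "action":                  # invoke the action's animation expert
            experts[link[1]["action"]](link[1], world)
        elif kind == "state":                 # states update the world model
            world.update(link[1])
        elif kind == "trnpt":                 # turning point: take first true branch
            for condition, subchain in link[1]:
                if condition(world):
                    traverse(subchain, world, experts)
                    break
        elif kind == "rut":                   # repeat until the termination state holds
            termination, subchain = link[1]
            while not termination(world):
                traverse(subchain, world, experts)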
IV CONCLUSIONS

The GAK system was able to produce accurate demonstrations of system features, given as input feature scripts describing the CAD system. The feature scripts required some modifications from the original CADHELP database; however, these modifications did not affect the system's ability to generate natural language explanations. GAK therefore demonstrated that a single scriptal knowledge structure describing a complex physical system can be used not only to describe that system, but also to control a process which simulates the system in operation. In addition, the Conceptual Dependency representation of the actions and states of the scripts proved to be a flexible and useful tool for storing the required control information for the animation, while the CD-frames also proved useful for maintaining the entity database of graphical information and representing locations.

In conclusion, GAK demonstrates the flexibility of a system which utilizes a unified representation scheme. The knowledge structures of the host system, CADHELP, proved completely suitable for an application in the domain of graphical animation from knowledge.

REFERENCES

[1] Cullingford, R.E., Krueger, M.W., Selfridge, M.G., and Bienkowski, M.A. "Automated Explanations as a Component of a Computer-Aided Design System." IEEE Trans. SMC, SMC-12:2 (1982) 168-181.

[2] Cullingford, R.E., Krueger, M.W., Selfridge, M., and Bienkowski, M. "Towards Automating Explanations." Proc. 7th International Joint Conference on Artificial Intelligence, Vancouver, B.C., 1981.

[3] Schank, R. (ed.) Conceptual Information Processing. North Holland, Amsterdam, 1975.

[4] Rieger, C. "The Commonsense Algorithm as a Basis for Computer Models of Human Memory, Inference, Belief and Contextual Language Comprehension." Proc. TINLAP Workshop, M.I.T., 1975.

[5] Bellavance, D.A. "An Interpretive Control Model for Multimodule Systems Containing Expert Knowledge." Master's Thesis, Dept. of Electrical Engineering and Computer Science, University of Connecticut, Storrs, CT, 1981.

[6] Charniak, E. "Toward a Model of Children's Story Comprehension." MIT AI Laboratory Technical Report 266, Cambridge, MA, 1972.

[7] McDermott, D. "Finding Objects with Given Spatial Properties." Research Report #195, Dept. of Computer Science, Yale University, New Haven, CT, 1981.
A CORNER FINDING ALGORITHM FOR IMAGE ANALYSIS AND REGISTRATION

J.-Q. Fang* and T. S. Huang
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
1101 W. Springfield, Urbana, Illinois 61801

ABSTRACT

In our algorithm the gradient of θ (the gradient direction) in a grey-level image is employed for detecting candidates for corner points. Two theorems show that this quantity attains a local maximum at a corner point. Two approaches, median filtering and the Hough transform, are used effectively to distinguish corner points from noise points. The corner points found by our algorithm are used in interframe matching.

Keywords: Corner Detector, Corner Finder, Median Filter, Hough Transform, Correspondence Problem.

*On leave from North China Institute of Computing Technology, Peking, China.

I INTRODUCTION

Several papers have mentioned the importance of corner detection in the analysis of time-varying imagery and stereo imagery. Moravec [77], Beaudet [78], Kitchen and Rosenfeld [80], and Yam and Davis [81] proposed different operators for detecting interest points or corner points directly from the grey values of images, and those methods are employed by Barnard and Thompson [80], Dreschler and Nagel [81], and Yam and Davis [81] in disparity analysis of images and image registration. Our corner detector is similar to that of Kitchen and Rosenfeld, but we put it on a solid mathematical basis and found effective ways to cope with the truncation errors in discretization and the noise in real-life digitized images.

II CORNER DETECTOR

We employ the gradient of θ, the gradient direction of the grey-value function f(x,y), as a corner detector:

θ(x,y) = arctan(f_y / f_x)    (2.1)

where ρ is the gradient magnitude:

ρ² = f_x² + f_y²    (2.3)

We use the local maxima of the product of the gradient magnitude of θ, to be denoted by c, and ρ to characterize corners. For a digitized image, we use the Sobel operator to approximate the derivatives.

Theorem 1

For an image containing a step edge along a curve defined by the equation

z(x,y) = 0    (2.4)

if the grey values of this image are given by the function

f(x,y) = ∫ from −∞ to z(x,y) of g(t) dt    (2.5)

where the function g(t) is nonnegative, first-order differentiable, absolutely integrable, unimodal and has a maximum at zero, and if the function z(x,y) has second-order derivatives, then the gradient of θ, the gradient direction of the grey-value function f(x,y), can be represented in terms of the derivatives of the function z(x,y).

Note: A simple example of the function g(t) is

g(t) = (1 / (a √(2π))) e^(−t² / (2a²))    (2.7)

Corollary

For an image having a step edge along a straight line defined as in Theorem 1, the gradient of θ in the neighboring area of the edge is always equal to zero.
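The following rough sketch of the corner-candidate measure uses NumPy/SciPy (the paper gives no code; np.gradient stands in for the Sobel approximation, and the angle-unwrapping and thresholding details are simplifications):

import numpy as np
from scipy.ndimage import maximum_filter

def corner_candidates(f, threshold):
    fy, fx = np.gradient(f.astype(float))
    rho = np.hypot(fx, fy)                 # gradient magnitude
    theta = np.arctan2(fy, fx)             # gradient direction

    # Gradient magnitude c of theta; differences are wrapped to (-pi, pi]
    # so that the 2*pi jumps of arctan2 do not masquerade as corners.
    ty, tx = np.gradient(theta)
    ty = np.angle(np.exp(1j * ty))
    tx = np.angle(np.exp(1j * tx))
    c = np.hypot(tx, ty)

    measure = rho * c                      # local maxima of rho*c mark corners
    peaks = (measure == maximum_filter(measure, size=3)) & (measure > threshold)
    return np.argwhere(peaks)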
Theorem 2

We approximate the tip of an ideal corner bounded by two straight edges by a continuous curve. For simplicity, we assume the corner tip is at (0,0) and the two edges of the corner are symmetrical with respect to the y-axis (see Figure 1). Then we can express the curve by

z(x,y) = by + p(x),   −h ≤ x ≤ h
z(x,y) = ax + by,     x < −h
z(x,y) = −ax + by,    x > h

where p(x) is even and has a second-order derivative which reaches a minimum at 0, and makes z(x,y) approximate the original corner within a distance in the range −h ≤ x ≤ h. If the approximate corner curve z(x,y) = 0 satisfies the above condition, then the maximum of the value of ρc of the grey-value function

f(x,y) = ∫ from −∞ to z(x,y) of g(t) dt

in the vicinity of the corner point is larger than (a/h) cos(α/2) g(0), where 0 < α < π is the angle of the ideal corner and g is the same as given in Theorem 1.

Figure 1. Corner and its approximation.

III DISTINGUISHING CORNER POINTS FROM NOISE POINTS

Because of noise and discretization errors, the candidates obtained by our algorithm contain noise points as well as corner points. We tried two approaches to pick out the corner points from the candidates.

A. Median Filtering

The first approach is applying a median filter in a window centered at a candidate, then calculating the values of c again to check if there is still a corner point near the original point. We use separable median filters for preserving edges and corners better and saving time.

The merit of the median filter is that it can eliminate isolated, irregular noise points such as spots or highlights on edges. These noise points always yield an abnormal value for the corner detector. But the median filter approach has difficulty in eliminating false corners which are near edges. This kind of false corner point occurs due to the truncation errors in discretization. See the upper right part of Figure 2.

B. Hough Transform

Using the Hough transform in a window centered at a candidate not only differentiates between the corner points and the noise points, but also acquires more information about the corner, such as the number of edges which form the corner, the slope of each edge and the angle between edges. We made some modifications to the O'Gorman and Clowes [76] method. The first modification is using the thinned gradient magnitude in place of the original one. We employed Cheng's [80] method for thinning edges. The second modification is considering more values of the angle ψ in the Hough space than only the one value used in O'Gorman's method. We tried four Hough transforms:

J: the O'Gorman and Clowes method with both modifications
I: the O'Gorman and Clowes method with modification 1
H: the O'Gorman and Clowes method with modification 2
K: the original O'Gorman and Clowes method

More precisely, if we denote the gradient magnitude and direction at the point (x_i, y_i) by W_ij and θ_ij respectively, and use the variables r and ψ as the distance and the angle in the equation of a straight line

x cos ψ + y sin ψ = r

then we have the following (FORTRAN-like) expressions for calculating the values of Hough transforms J, I, H and K:

J: h(r,ψ) = h(r,ψ) + e^(−|ψ − θ_ij| / 10), if (x_i, y_i) is a skeletal pixel after thinning in a map of gradient magnitude.

I: h(r,ψ) = h(r,ψ) + 1, if (x_i, y_i) is a skeletal pixel and ψ = θ_ij.

H: h(r,ψ) = h(r,ψ) + W_ij e^(−|ψ − θ_ij| / 10), if W_ij is above the threshold.

K: h(r,ψ) = h(r,ψ) + W_ij, if W_ij is above the threshold and ψ = θ_ij.

Note: in Hough transforms I and K, each point in the image space maps to only one point in the Hough space (r,ψ), where ψ = θ_ij, while in Hough transforms J and H, we take the points near θ_ij into account by including the factor e^(−|ψ − θ_ij| / 10).
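An illustrative accumulator update for variant J, written in Python rather than the paper's FORTRAN-like notation (the window extraction, thinning step, and bin sizes are assumptions; the /10 constant is taken verbatim from the paper, and its effective units depend on how the angles are quantized):

import numpy as np

def hough_J(skeletal_pixels, theta, n_r=32, n_psi=36, r_max=16.0):
    """skeletal_pixels: (x, y) pixels surviving gradient-magnitude thinning;
    theta[x, y]: gradient direction at each pixel (radians)."""
    h = np.zeros((n_r, n_psi))
    psis = np.linspace(0.0, np.pi, n_psi, endpoint=False)
    for (x, y) in skeletal_pixels:
        for j, psi in enumerate(psis):
            r = x * np.cos(psi) + y * np.sin(psi)   # x cos(psi) + y sin(psi) = r
            if 0.0 <= r < r_max:
                i = int(r / r_max * n_r)
                # vote with the factor exp(-|psi - theta_ij| / 10)
                h[i, j] += np.exp(-abs(psi - theta[x, y]) / 10.0)
    return h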
IV EXPERIMENTS AND CONCLUSIONS

Images for tests. Three pairs of images were used: polyhedra, model planes and robot arms. The procedure consisted of three steps: (i) detection of candidates for corner points, (ii) distinguishing corner points from noise points, (iii) matching between the corner points in two frames. The first two pairs are stereo images, whereas the last one is of moving objects.

Results of the experiments. The candidates found by the corner detection algorithm include most real corner points. However, half of the candidates are noise points. Both of the two methods for distinguishing corner points from noise points worked well: they eliminated 90-95% of the noise points. Matching between the corner points in the two frames was successful. (We used Ranade and Rosenfeld's [80] method for matching.) See Figures 2 and 3.

Figure 2. Upper: results of the different approaches to corner finding; squares denote the corner points and x's the noise points recognized by step 2 (distinguishing corner points from noise points). Lower: results of the point matching between two frames of a robot arm (the robot arm moved); corresponding points are represented by the same number.

Comparison between Median Filter and Hough Transform. By carefully checking the results of our experiments, we found that the Hough transform approach worked better in most cases than the median filter approach. The median filter approach has difficulty with near-edge error points (see the upper right of Fig. 2). The Hough transform approach is sensitive to line-shaped noise such as occurred at the wrist of the robot (see Figure 2). This kind of noise is caused by the saturation of the digital camera. One shortcoming of the Hough transform approach (compared with the median filtering approach) is that it is more time-consuming.

Parallel processing. All formulas employed in our algorithm can be evaluated in parallel, and the basic calculation in these formulas is simple, so the algorithm is well suited to multiprocessors.

Limitation of the algorithm. This algorithm has been tested only with indoor scenes containing man-made objects. Because of the sensitivity of the algorithm to noise and other errors, one may have trouble applying it to pictures of natural scenes or complicated objects.

Figure 3. Results of point matching between the stereo pairs. Upper part: correspondence between the left and right images of a model plane, corner points found by the median filter. Lower part: correspondence between the left and right images of polyhedra, corner points found by Hough transform J.

ACKNOWLEDGEMENT

This work was supported by the National Science Foundation under Grant ECS-8112082.

REFERENCES

Barnard and Thompson [80] S. T. Barnard and W. B. Thompson, "Disparity Analysis of Images," IEEE Trans. PAMI-2 (1980) pp. 333-340.

Beaudet [78] P. R. Beaudet, "Rotationally Invariant Image Operators," IJCPR-78, pp. 579-583.

Cheng [80] J. K. Cheng, "Algorithms for Matching Relational Structures and their Applications to Image Processing," Ph.D. Thesis, Purdue Univ., West Lafayette, Indiana, Dec. 1980.

Dreschler and Nagel [81] L. Dreschler and H. H. Nagel, "Volumetric Model and 3D-Trajectory of a Moving Car Derived from Monocular TV-Frame Sequences of a Street Scene," IJCAI-81, pp. 692-697.

Huang, Yang and Tang [79] T. S. Huang, G. J. Yang and G. Y. Tang, "A Fast Two-Dimensional Median Filtering Algorithm," IEEE Trans. ASSP-27, No. 1 (1979) pp. 13-18.
Rosenfeld, "Gray-Level Corner Detection," TR-887 (April 1980), Computer Science Center, University of Maryland, College Park, MD. Moravec [77] H. P. Moravec, "Towards Automatic Visual Obstacle Avoidance," IJCAI-77, p. 584. O'Gorman and Clowes [76] F. O'Gorman and M. B. Clowes, "Finding Picture Edges Through Collinearity of Feature Points," IEEE Trans. on Computers, C-25 (1976) p. 449. Ranade and Rosenfeld [80] S. Ranade and A. Rosenfeld, "Point Pattern Matching by Relaxation," PR-12 (1980) pp. 269-275. Yam and Davis [Sl] S. Yam and L. S. Davis, "Image Registration Using Generalized Hough Transform," Proc. IEEE Conf. Pattern Recognition and Image Processing PRIP-81, Dallas, Texas, August 3-5, 1981, pp. 526-533. 49
THE ROLE OF EYE POSITION INFORMATION IN ALGORITHMS FOR STEREOSCOPIC MATCHING

K. Prazdny
Laboratory for Artificial Intelligence Research
Fairchild Instrument and Camera Corporation
4001 Miranda Avenue, MS 30-886
Palo Alto, California 94304

ABSTRACT. The viewing system parameters probably have to be explicitly taken into consideration in formulating computational solutions to the stereoscopic matching problem in binocular vision. An algorithm for computing the directions of gaze from purely visual information, without a prior solution to the correspondence problem, is outlined. It would seem that the availability of such an algorithm (or a similar one) is a precondition for any stereo matching process using the epipolar constraint.

1. Introduction.

The structure of a 3D scene can be uniquely determined from two projections taken at two spatially separated locations. This fact and the underlying geometry are the basis of binocular stereopsis. Conceptually, the problem has been divided into two sub-problems:

[1] one has to solve the correspondence problem which, when successfully accomplished, enables one to

[2] compute the relative orientation of the two view directions (the orientations of the two principal rays, that is), and the structure of the viewed scene (depth relationships between various locations in the scene).

In this paper it is argued that an algorithm based on this particular serial decomposition of the problem is flawed; the eye orientation parameters probably have to be computed before an attempt to solve the correspondence problem. The origin of the present analysis is the belief that the major goal of binocular stereopsis is to obtain absolute depth information (i.e. distances scaled by the interocular separation) in near space (i.e. at distances up to about 1-2 metres) for the purposes of fine hand-eye coordination. This specification of the goals of binocular stereopsis, while certainly not novel, leads, however, to a reformulation of the problem. This is due to the fact that absolute depth cannot be computed without accurate information about the viewing system's parameters. Because the eye position information from extraretinal sources is assumed either to be unavailable to the visual system, or to contain errors too large to be useful (the experimental evidence seems to show that only very rough information about eye position and motion is available [see e.g. Bridgeman, 1980]), one should look for a way to compute the directions of gaze from purely visual information. The problem of computing the "camera parameters" is conceptually not very difficult and has been investigated in the past (e.g. Panton, 1977; Gennery, 1979; Longuet-Higgins, 1981). The problem with this classical approach is the requirement that the correspondence problem be somehow solved for some minimal subset of image elements. The eye orientation parameters are then computable with a precision that depends mostly on the accuracy with which the positions of the image "points" can be specified.

2. The epipolar constraint.

A large amount of literature on stereo vision has been devoted to the problem of establishing a "point-to-point" correspondence between the left and right images to extract the disparity information (e.g. Dev, 1975; Julesz, 1963, 1971; Nelson, 1975; Marr and Poggio, 1976, 1979; Mayhew and Frisby, 1981).
The usual approach is exemplified by Marr's statement that "because the two eyes lie horizontally, we need consider only all the possible matches along horizontal lines; therefore, we can reduce the problem to the simple one-dimensional case..." (Marr, 1982, p. 116). This statement is unfortunately true only in a special case, when the directions of gaze are (nearly) parallel, i.e. when the eyes fixate a distant object. In general, the epipolar lines referred to above are neither horizontal nor parallel (see Figure 1). The epipolars refer, in this context, to a set of pairs of straight lines such that all points on a straight line in one image have matching points along a straight line in the other image. A planar projection surface is assumed here; in the case of a spherical retina, a straight line would correspond to a great circle. The idea of an epipolar line pair is a powerful computational constraint which also seems to have strong physiological support. For example, in a recent paper, Kroll and van de Grind (1980, p. 667) successfully argue that the interactions between binocular neurons are limited to neurons with the same elevation specificity. While elevation here presumably means a horizontal scanline, another, more plausible, interpretation is possible: elevation could be interpreted to mean the appropriate epipolar line. The orientation of the epipolars in the image plane is a function of the orientation of the directions of gaze (see section 4 for detail).

In automatic photo interpretation or machine vision, the problems are usually approached without explicit concerns about biological constraints. The stereoscopic matching is conceptualized as a genuine two-dimensional problem (see e.g. Barnard and Thompson, 1980). In many cases, the camera orientations are available a priori. If not, a common approach is to apply an "interest operator" which extracts a set of (monocularly) distinguishable (usually in terms of grey level distribution) features in each image. The correspondence problem may be solvable for this set, which in turn makes the "camera calibration" (computing the viewing system's parameters) possible. Unfortunately, this approach breaks down in situations where it is not possible to solve the correspondence problem for a subset of image "tokens" based on the uniqueness of certain features in the monocular input. An example is random dot stereograms. The ability of the (human) visual system to establish correspondence in this case implies the existence of a mechanism based on different considerations.

To conclude, although it is well known that horizontal disparities are by themselves uninterpretable, what is less commonly realized is that they cannot even be measured unless some information about eye position is explicitly used to guide the stereo-matching process. Previous conceptualizations of the task have (usually) seen the computational process as decomposable into two serial modules: first extract the disparities, and then interpret them. This cannot work, because in order to solve the correspondence problem utilizing the epipolar constraint one needs accurate information about the direction of gaze. This information cannot be obtained from extraretinal sources and has to be computed from visual information. But in order to do this one is supposed to have solved the correspondence problem for some minimal subset of image feature points.
This is the famous "chicken and egg" problem in another disguise. It is interesting to note that problems of this class arise frequently in the investigation of visual perception. For example, the computation of "shape-from-shading" and obtaining the direction of illumination seem to be similarly interrelated.

3. Possible solutions.

Three solutions to the problem identified above present themselves immediately.

[1] One can eliminate the epipolar constraint completely and conceptualize the stereoscopic matching in a way similar to motion computation in the monocular motion parallax situation, i.e. as a genuine two-dimensional matching problem (see e.g. Barnard and Thompson, 1980).

[2] One may try to develop a scheme in which it is possible to make the stereoscopic matching and the computation of the directions of gaze run concurrently as an intrinsically cooperative process. One way to do this may be to make the matching process a function of the eye orientation.

[3] The third possibility is to compute the eye orientation angles prior to, and independently of, the subsequent matching process, which may then use the epipolar constraint.

In section 5 it will be shown that it is, theoretically at least, possible that an algorithm of this latter class, directed by the global requirement that the directions of the gaze of the two eyes must be the same whichever points are used to compute them (this is a very strong global constraint holding over the whole image), could be used to accomplish this. The computational complexity of the proposed algorithm can be enormous if the minimal subset of feature point pairs (potential matches) needed to compute candidate directions of gaze is large. In the next section it will be shown that when some simple assumptions are made, this subset can be as small as 2 or 3 feature points. It will be seen that under these circumstances, the whole process can usefully be conceptualized as maintaining a 1-dimensional histogram and locating its highest peak.

4. Computing the directions of gaze.

Refer to Figure 2. It is assumed that the eye moves in such a fashion that the results of its motions can be described by a rotation utilizing only two of the possible three degrees of freedom (Donder's law). In our case, the description consists of rotations about an axis perpendicular to a common plane which itself is restricted to rotations about the interocular axis (vector p in Figure 2). An image point p_i and the interocular axis O_l O_r define a plane which cuts each image along an epipolar line. In the case of a planar projection surface, these two lines are straight lines. All epipolars in an image meet at one common point. The locus of zero "vertical" disparities coincides (in our case) with the image x-axis. Referring to Figure 2, it is easy to see that

p = (sin β, 0, cos β)
k_r = (−sin α, 0, cos α)    k_l = (0, 0, 1)
i_r = (cos α, 0, sin α)     i_l = (1, 0, 0)
j_r = j_l = (0, 1, 0)

(the focal distances are assumed to be 1). After some algebra it is seen that the x-coordinates of the epipolar intersections are specified by (tan β, 0) and (tan(α+β), 0) in the left and right image coordinate frames respectively (the y-coordinates are zero). Within this geometry, the eye orientation parameters (the angles α and β) are computable from the correspondences defined at three (the fovea and two arbitrarily chosen) image points, by solving a single fourth-degree equation. To see this, consider a (left) image point p'_i, with a corresponding point in the right image p''_i.
The transformation from the right to the left image coordinate system is defined simply as

P_i' p_i' = (P_i'' p_i'' + T) A    (1)

This says that the vector P_i' p_i' (P_i' is the magnitude and p_i' the unit direction vector), defined in the left coordinate system, is obtained from its counterpart, defined in the right coordinate system, by translating and rotating it respectively. T = I p, where I is the interocular separation. The rotation A is determined by the matrix

A = [ cos α   0   −sin α ]
    [ 0       1   0      ]
    [ sin α   0   cos α  ]

Equation (1) can also be written as

P_i' p_i' A⁻¹ = P_i'' p_i'' + T    (2a)

Multiplying equation (2a) by the unit vector p_i'' (the vector product is denoted by "×"), and scalar multiplying (denoted by "·") the result by p, produces

P_i' (p_i' A⁻¹ × p_i'') · p = 0    (2b)

This means that (p_i' A⁻¹ × p_i'') is perpendicular to p. Consequently, the vector n_ij = (p_i' A⁻¹ × p_i'') × (p_j' A⁻¹ × p_j'') points in the direction of p. The same must hold for any pair of points, i.e. n_jk = (p_j' A⁻¹ × p_j'') × (p_k' A⁻¹ × p_k''). Thus

n_ij × n_jk = 0    (3)

It is advantageous to choose point p_i to be the fixation point, i.e. p_i' = (0,0,1) and p_i'' = (0,0,1). [It is assumed that the principal rays through the two centres of fovea do indeed intersect.] Expanding (3), and after a good deal of algebra, we obtain

a x⁴ + b x³ + c x² + d x + e = 0    (4)

where x = sin(α), and a, b, c, d, and e are constant coefficients (for the fixation point, p_i' A⁻¹ = (sin α, 0, cos α)). Equation (4) has a closed form solution and four distinct roots, in general. An important point to note is the fact that equation (3) does not involve radial distances, and that it holds for all points on the epipolar line (i.e. if equation (4) applied to points j and k results in a value x, the same value x is obtained using points j and l, provided k and l lie on an epipolar line in the right image).

To compute the eye orientation angles (and thus the slope of the epipolar lines on the image surface) one may use the enormous amount of redundancy afforded by the global constraint that the directions of gaze have to be the same whatever set of points is used to compute them. The following simple algorithm accomplishes this.

[1] For each pair of points p'_i, p'_j in the left image, consider all possible matches (within some neighbourhood) in the right image, and for each such possibility, compute the value sin(α) as outlined above [equations (3) and (4)].

[2] Make a histogram of those values of sin(α) lying within the allowable range (e.g. the principal directions cannot point behind the head, nor even too much sideways). If a particular value of sin(α) occurs n times, the histogram value will be n. In practice, because of errors of various natures, one would update the frequencies within some neighbourhood of a particular value of sin(α) (according to some, perhaps gaussian, weighting function). Choose the value under the highest peak of the histogram as the true value of sin(α), compute the angle β (from 2b), and from this the orientation of the epipolar lines.

It is clear that because of the more or less random nature of the competing "incorrect" matches (i.e. matches not lying on the correct epipolar lines), the histogram will indeed peak above the correct value of sin(α).
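An illustrative sketch of this histogram scheme follows, with the quartic-solving step stubbed out: candidate_sin_alphas stands for solving equations (3)-(4) for a hypothesized pair of matches, and the neighbourhood size, bin count, and gaussian width are invented parameters (points are assumed to be 2-D NumPy arrays):

import numpy as np

def estimate_sin_alpha(left_pts, right_pts, candidate_sin_alphas,
                       neighbourhood=20.0, n_bins=200):
    hist = np.zeros(n_bins)
    bins = np.linspace(-1.0, 1.0, n_bins)
    for i, pi in enumerate(left_pts):
        for pj in left_pts[i + 1:]:
            for qi in right_pts:                     # hypothesized matches
                if np.hypot(*(qi - pi)) > neighbourhood:
                    continue
                for qj in right_pts:
                    if np.hypot(*(qj - pj)) > neighbourhood:
                        continue
                    for s in candidate_sin_alphas(pi, pj, qi, qj):
                        if -1.0 <= s <= 1.0:         # allowable range only
                            # gaussian-weighted update of a neighbourhood
                            hist += np.exp(-((bins - s) ** 2) / 0.001)
    return bins[np.argmax(hist)]                     # value under highest peak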
The method, in its present form, has several shortcomings. One is its "pointillistic" way of treating the images. The assumption that an image can usefully be described by point descriptors is valid in the case of a random dot stereogram, but may not be very useful in the general case. The second problem is exemplified by the disparity field in Figure 1. While the disparities associated with elements of the frontal surface are within the fusable range specified by Panum's area, the disparities associated with the background surface are clearly too large (this is why vergence eye movements are essential to fusion). Consequently, if the image as a whole writes into the same histogram, and there are regions with disparities larger than the testing neighbourhood used in step [1], one may encounter problems. One solution may be to keep local histograms and test for agreement. This in turn suggests an intriguing possibility of segmenting the image on the basis of binocular combination before any depth perception. [This problem may turn out to be a non-problem in view of some evidence suggesting that binocular interactions can occur between points separated by as much as 7 degrees of visual arc (see Marr, 1982, p. 154).]

The proposed algorithm is local and parallel in nature: the computation of α, and the histogram updating, can easily be done at each image locus at the same time and independently of computations at other retinal loci. It should also be observed that nothing has been said about the classic ambiguity problem; it is orthogonal, in a sense, to the question of the orientation of the epipolar lines on the projection surface. False matches along an epipolar cannot be resolved without employing some other considerations (see e.g. Marr and Poggio, 1979). Once this has been done, however, the computation of absolute depth is, conceptually at least, trivial.

7. Conclusion.

The discussion in the beginning of this paper is related to the concept of the "plasticity of retinal correspondence" (Nelson, 1977): two loci on the two retinae do not signal a fixed disparity; the disparity signalled is a function of eye position (Richards, 1971). This modulation cannot, however, be a simple function of eye position obtained from extra-retinal sources: we argued that precise information not obtainable from these sources is essential. Such a conclusion is a direct consequence of formulating the goal of binocular stereopsis as obtaining absolute depth information about the near space, and of the available evidence that the epipolar constraint is used in stereoscopic matching.

A simple algorithm for computing the eye orientation parameters was outlined to suggest a way in which a visual system could obtain accurate information about the orientation of the epipolar lines on the projection surface from purely visual sources, before it attempts to solve the correspondence problem. The availability of such an algorithm is a precondition for any stereo matching algorithm using the epipolar constraint.

Acknowledgement. I wish to thank Prof. John Mayhew and Dr. J. M. Tenenbaum for their comments and discussion.

References

Barnard S.T., Thompson W.B. 1980 Disparity analysis of images. IEEE Trans. Pattern Anal. Mach. Intell. 2, 333-340.

Bridgeman B. 1980 Direct perception and a call for primary perception. The Brain and Behavioral Sciences 3, 382-383.

Dev P. 1975 Perception of depth surfaces in random-dot stereograms. Int. J. Man Mach. Studies 7, 511-528.

Gennery D.B. 1979 Object detection and measurement using stereo vision. Proceedings DARPA Image Understanding Workshop, 101-107. Science Applications Inc., Arlington, Virginia.

Julesz B. 1963 Towards the automation of binocular depth perception. Proceedings IFIPS Congr., Munich 1962, ed. C.M. Popplewell. Amsterdam: North Holland.
Julesz B. 1971 Foundations of Cyclopean Perception. Chicago: The University of Chicago Press.

Kroll J.D., van de Grind W.A. 1980 The double-nail illusion. Perception 9, 651-669.

Longuet-Higgins H.C. 1981 A computer algorithm for reconstructing a scene from two projections. Nature 293, 133-135.

Marr D. 1982 Vision. San Francisco: Freeman.

Marr D., Poggio T. 1976 Cooperative computation of stereo disparity. Science 194, 283-287.

Marr D., Poggio T. 1979 A computational theory of human stereo vision. Proc. R. Soc. Lond. B 204, 301-328.

Mayhew J.E.W., Frisby J.P. 1981 Psychophysical and computational studies towards a theory of human stereopsis. Artificial Intelligence 17, 349-385.

Nelson J.J. 1977 The plasticity of correspondence. J. Theor. Biol. 66, 203-266.

Panton D.J. 1977 A flexible approach to digital stereo mapping. Photogrammetric Engineering and Remote Sensing 44, 1499-1512.

Richards W. 1971 Anomalous stereoscopic depth perception. J. Opt. Soc. Am. 61, 410-414.

Figure 1. A disparity map produced by viewing two slanted planes in depth. The observer fixates the front surface located 6 interocular distances away (approximately the reading distance). The distance of the background surface is 8 interocular separations. The field of view is 20 degrees of visual arc. The image is 1000x1000 pixels. Vertical disparity ranges from about -50 to +50 pixels.

Figure 2. Binocular geometry used in section 4. The two coordinate frames centered at O_l and O_r (the nodal points of the left and right eye, respectively) are fixed with respect to the retinae and are constrained to move such that j_l = j_r and the principal rays and the interocular axis lie in the same plane (i.e. obeying Donder's law). s_l and s_r are the principal rays (fixation directions). FP is the fixation point. The (mutual) orientation of the two eyes (image planes here) is determined by the angles α and β.
REPRESENTING SMOOTH PLANE CURVES FOR RECOGNITION: IMPLICATIONS FOR FIGURE-GROUND REVERSAL*

D. D. Hoffman & W. A. Richards
Artificial Intelligence Laboratory and Department of Psychology
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

ABSTRACT

A representation of smooth plane curves for recognition is proposed. The basic representation is a linked list of four primitive shapes, called "codons", which are invariant under rotations, translations and uniform scaling. Psychophysical observations regarding the perception of figure-ground reversals are presented to suggest that a similar representation could be used by the human visual system.

I INTRODUCTION

A vision system sometimes must compute its initial descriptions of a shape without benefit of context. Yet these descriptions should highly constrain the set of possible matches with memory if they are to be useful for recognition. For example, though one cannot reasonably predict the contents of figure 1 prior to seeing it, the shapes are readily recognized. This simple demonstration implies the existence of context-independent rules that provide shape descriptions which can be used to initiate the recognition process. Such rules for smooth plane curves are the subject here.

To be useful for triggering the recognition process the initial rules should be computable on images, should yield descriptions which are invariant under translations, rotations** and uniform scaling, and should provide a first index into a table of shapes in memory. Although a plane curve r(s) = (x(s), y(s)) can be specified in many different ways, a description based upon its curvature κ(s) is attractive. Such a representation satisfies two of the invariance conditions, namely translation and rotation independence [2]. For any rotation θ and translation (u, v), r(s) is uniquely given by

r(s) = ( ∫ cos φ(s) ds + u, ∫ sin φ(s) ds + v ), where φ(s) = ∫ κ(s) ds + θ

However, because curvature is scale dependent, a means for representing κ(s) in a scale-invariant manner must still be sought. In addition, the representation itself should not be a continuous function if it is to serve as an index into memory. Rather, the representation should provide an articulation of κ(s) into units which can be described qualitatively. A reasonable approach is to exploit singular points of orders 1 and 0, i.e. maxima, minima and zeroes of curvature [3, 4], since the property of being a singular point is invariant under rotations, translations and uniform scaling.

Figure 1. Some shapes recognizable without benefit of context.

*This paper describes research done at the Artificial Intelligence Laboratory and Psychology Department of the Massachusetts Institute of Technology. Support for this research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505 and by NSF and AFOSR under grant 7923110-MCS. D.D. Hoffman was also supported by a Hughes Aircraft Company doctoral fellowship.

**More precisely, the early descriptions should decouple the position, rotation and overall scaling of a shape from the shape itself. This allows the position, rotation, scaling and shape to be made explicit separately, and allows their effects on the recognition process to be disentangled. Thus the goal of rotational invariance for shape descriptions, for example, in no way implies that the rotation of a shape cannot affect its interpretation [1].

II MAXIMA, MINIMA AND CURVE "ORIENTATION"

Which points are maxima or minima of curvature depends on the orientation of the curve. Though in general curvature is an unsigned quantity, in the case of plane curves it is possible to assign a sign to the curvature consistently once one of the two possible orientations for the curve is chosen. The orientation is usually specified in figures by an arrow on the curve pointing in the direction in which the curve is to be traversed. By a change in orientation of a plane curve the sign of the curvature changes everywhere along the curve. In particular, maxima become minima and vice-versa. The convention adopted here is that figure is to the left and ground to the right as the curve is traversed in an orientation. Thus knowing which side is figure determines the choice of orientation on a curve or, conversely, choosing an orientation determines which side is figure by convention. Minima are then typically associated with concavities of the figure, maxima with convexities (see figure 2). It is possible, however, for minima to have positive curvature, as in the case of convex closed curves, or maxima to have negative curvature, as when the orientation of the convex closed curve is reversed.
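A small numerical illustration (not from the paper) of how the sign of curvature flips when a plane curve's orientation is reversed; the signed curvature formula k = (x'y'' − y'x'') / (x'² + y'²)^(3/2) is standard:

import numpy as np

def signed_curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0.0, 2.0 * np.pi, 400)
x, y = np.cos(t), np.sin(t)                 # unit circle, counterclockwise

k_ccw = signed_curvature(x, y)              # approximately +1 everywhere
k_cw = signed_curvature(x[::-1], y[::-1])   # reversed traversal: about -1

print(k_ccw[200], k_cw[200])                # maxima and minima swap roles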
II MAXIM 2 MINIM/~ ANI) CUKVI-‘“OI~II~N’I’A’I‘ION” AL. ---L - .-- ~_ --L ---1-- -.-- __ Which points arc maxilna or minima of curvature dcpcnds on the orientation of tllc CuI’vC. ‘l’hough in gcncral cur\ aturc ic an unsigned quantity, in the cast of plant cur\cs it is J>ossiblc to aG?n a sign to the curvature con+stcntly once one of the t\j’n possible orientations for the curve is chosc11. The oric‘ntation is usually spcclficd in figure\ by an arrow on thc~ curve pointin, n in the direction in which the curl c is to bc tra\,crscd. Ijy a cl~mgc in orientation of 3 J~~IIC curve the sign of tbc curvature chnngcs cvc~~y~sl~rc a1011g the cur\c. In J)articuJar maxima bcco~nc minima and vice-versa. ‘1‘1~ con\ cntion adopted hcrc is that figure is to the left and ground to the right as the curve is travcrscd in an oricntiltion. ‘I’hus knowing which side is figure dctcnnincs the choice of orientation on a curve or, convcrscly. choosing an orientation dctcr- mints which side is figure by con\cntion. Minima in-c rhcn typically associated uith concavities of the figure, maxi1n.r with convcxitics (SW figure 2). It is J,ossiblc howcvcr for minima to have positive curvature, as in tlic case of co~lvcx closed curves, or maxima to have ncgativc cur- vature. as ~11~11 the oricntrltion of the convex closed curve is rcvcrsed. 5 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. 31 2: Figure 2. Minima of curvature (indicated by slashes). Arrows indicate curve orientation. III SEGMENTATION Maxima, minima and zeroes of curvature arc all candidate points for partitioning a curve into kinits in a manner invariant under rotations, translations and uniform scaling. To choose among them WC require that the units should reflect natural parts of shapes [5, 61. Fortunntcly, when 3-I) p;rrts arc joined to crcatc complex objects concakitics will generally bc crcatcd in the silhoucttc. Scgmcntation of the image at concavities thcrcforc immcdiatcly cncodcs in a straightforward manner an important PJO~CJ~Y of tl!c natural world that is not captured by max- imas or 7crocs of curvature. This is our ge1fe177Zpusiliu~~ argument for segmentation at minima of curvature.* Figure 3. Joming parts generally produces concavitlcs in the resulting silhouette. 1V Dt’SCl~IPTION OF PAl~l’S~ CONTOUR CODONS ---A Minima of curvature arc used to break a curve into scgmcnts, whcrcas maxima and zcrocs arc used to describe the shape of each scg- mcnt. There arc four basic types of scgn~cnts. which WC call “contour codons”. I’urthcrmorc, only certain codon joins (pairwisc connections) arc allowable. First, all curve segments contain 7~~0, one or two points of zero cur- vaturc. (This assumes that when K(.s) = 0, K’(S) # 0). Scgmcnts with no %CJOCS arc called type 0 codons, those with two zcrocs arc called type 2 codons. If a scgnlent has cxactty one ZCJO, the zero may be cncountcrcd either before (type l-) or nfter (type l+) the maximum point of the scgmcnt when traversing the curve in the chosen orienta- tion. Thus thcrc arc four basic codon Lypcs (figure 4). 1 Figure 4. Contour codons (as dcfincd in section indlcatcd by do& on the curves, minima ‘by slashes. 4). Zcrocs of curvak~re are If desired, codons can bc further described by noting the positions of the maximum and any ZCJOCS of curvature within the scgmcnt, nor- malizcd by lhc total arc length of the scgmcnt. This should be done qualitatively at first and then more quanGtativcly as is ncccssary. 
As shown in figure 5, two segments can have identically placed maxima and zeroes, identical curvatures at the maxima and minima, and yet appear quite different. The difference is the behavior of the curvature between the singular points. This behavior can be described in an appropriately invariant manner by the integral of curvature between each of the singular points:

∫ from a to b of κ(s) ds = θ(b) − θ(a)

where θ(s) is the angle of the tangent at r(s), given by θ(s) = tan⁻¹(y'(s)/x'(s)). A representation which notes the integral of curvature between the singular points will give different descriptions for curves A and B in figure 5.

Figure 5. Curve segments with identical singularities but different shapes.

There are restrictions on how codons may be joined at minima. Define a codon join by the operation a ∘ b, a, b ∈ {0, 1-, 1+, 2}, indicating that the head of a is smoothly connected to the tail of b. Note that in general a ∘ b ≠ b ∘ a, and hence the codon sequence is critical. Not all conceivable codon joins are possible (figure 6).

Figure 6. Table of allowable codon joins. Rows and columns are labelled by codon type, with the intended codon join sequence at each table entry being (row type, column type). Legal joins are indicated by a +, others by a -.

The fact that not all conceivable codon joins are allowable suggests that the codon representation may be amenable to error correction techniques. Consider, for example, the codon string ... c_{j-1} c_j c_{j+1} .... If all codon joins were allowable then c_j could take any one of the four values (0, 1+, 1-, 2) regardless of the values of c_{j-1} and c_{j+1}. Thus the value of c_j would be independent of its context. Using figure 6, however, one can show that in actuality the context of a codon restricts its range of possible values to two on average. One can also show that in one third of the contexts c_j is actually uniquely determined.

V RELATION TO PERCEPTION

The representation of plane curves for recognition proposed here can explain the well-known observation that a curve can look very different depending on which side is perceived as figure and which as ground [7]. (See figure 7a.) The explanation is that a curve looks different because its representations under the two possible orientations are completely different. Since the positions of the minima of curvature are not invariant under a change in orientation (direction of traversal) of a curve, the parts of a curve as specified in its representation can be quite different for the two orientations. (See figure 7b.) If one chose to define parts by zeroes of curvature [8], or by minima and maxima [3, 4, 10], the parts would not differ under a change in orientation of a curve.
The perceptual significance of minima can also be demonstrated by a simple modification of Rubin's ambiguous face-goblet illusion [9]. If the locations of minima are indicated by occlusions, then the perception of the curve is biased toward one of its two orientations and either the face or goblet impression becomes more apparent (figure 8). However, when this figure is viewed at a distance so the added lines are not visible, then the classical instability returns. Neither highlighting the zeroes nor highlighting the maxima has a comparable effect, because they do not correspond to natural points for segmentation.

Figure 8. Rubin's face-goblet illusion segmented at minima to reduce the instability.

VI LOCK-KEY AND MIRROR REVERSAL TRANSFORMS

The codon representation is designed to decouple the shape of a contour from its disposition in space and its overall size. Consequently the shape description is invariant under rotation, translation and uniform scaling of the contour. However, as demonstrated in figure 7b, the shape of a contour is not invariant when the direction of traversal along the contour is reversed. It is not difficult to convince oneself also that the shape description is not invariant under a mirror reversal of the contour. The question naturally arises: are there simple rules that define how the codon string of a contour is transformed when the contour undergoes a mirror reversal or a reversal in direction of traversal (change in orientation)?

In the case of a mirror reversal the rule is quite simple. The mirror transform of a codon string is obtained by reversing the direction in which the string is read (right to left rather than left to right) and reversing the sign attached to each type 1 codon. Thus the mirror transform of <1+ 2 1- 1+ 0 1-> is <1+ 0 1- 1+ 2 1->. This rule can also be used to find symmetries within a single contour. If, for example, one half of a codon string is found to be the mirror transform of the remainder of the string, a necessary condition for the curve to be symmetric has been found. Note that this applies to skew symmetry as long as zeroes of curvature are not made to appear or disappear by the skew.

When the sense of traversal of a curve is reversed, the codon string transformation rule, called the lock-key transform, is unique but apparently not simple. It is perhaps most easily specified as a map from pairs of concatenated codons to codon singletons. The codon doublets which map to each codon singleton are {(00, 01+, 1-0, 1-1+) ↦ 0}, {(01-, 02) ↦ 1-}, {(1+0, 20) ↦ 1+} and {(1+1-, 1+2, 21-, 22) ↦ 2}. This lock-key mapping can be used, for example, to transform each of the codon strings of figure 7b into the other.
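Both transforms are small enough to state as code. The sketch below uses the doublet table given above; the cyclic pairing of adjacent codons and the final reversal of the result are our own conventions, chosen so that the two strings of figure 7b map into each other.

    MIRROR = {"0": "0", "2": "2", "1-": "1+", "1+": "1-"}

    LOCK_KEY = {
        ("0", "0"): "0", ("0", "1+"): "0", ("1-", "0"): "0", ("1-", "1+"): "0",
        ("0", "1-"): "1-", ("0", "2"): "1-",
        ("1+", "0"): "1+", ("2", "0"): "1+",
        ("1+", "1-"): "2", ("1+", "2"): "2", ("2", "1-"): "2", ("2", "2"): "2",
    }

    def mirror(codons):
        """Read right-to-left and flip the sign of each type 1 codon."""
        return [MIRROR[c] for c in reversed(codons)]

    def lock_key(codons):
        """Map each cyclically adjacent doublet to a singleton, then read
        the result in the reversed sense (the traversal has reversed)."""
        n = len(codons)
        out = [LOCK_KEY[(codons[i], codons[(i + 1) % n])] for i in range(n)]
        return out[::-1]

    print(mirror(["1+", "2", "1-", "1+", "0", "1-"]))
    # -> ['1+', '0', '1-', '1+', '2', '1-'], the mirror example above
    print(lock_key(["1+", "2", "1-", "1+", "0", "1-"]))
    # -> ['0', '1-', '1+', '0', '2', '2'], the other string of figure 7b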
VII SUMMARY

An approach to the representation of plane curves for recognition has been sketched. It is suggested that minima of curvature can be used to break a curve into parts, and that maxima and zeroes can be used to describe the parts. This approach explains why a curve can appear quite different when figure and ground are reversed. Extensions of the approach to piecewise-smooth curves are presented in Hoffman and Richards [11]. Extensions to surfaces are desirable.

ACKNOWLEDGEMENTS

We thank J. Rubin, J. Scheuhammer, S. True, K. Stevens and A. Bobick for comments on previous drafts.

REFERENCES

[1] Rock, I.: The perception of disoriented figures. Sci. Amer., 230, 1, 78-85 (1974).
[2] do Carmo, M.: Differential geometry of curves and surfaces. Prentice-Hall, New Jersey (1976).
[3] Attneave, F.: Some informational aspects of visual perception. Psychol. Rev., 61, 183-193 (1954).
[4] Duda, R., Hart, P.: Pattern classification and scene analysis. Wiley, New York, 337-341 (1973).
[5] Marr, D.: Analysis of occluding contour. Proc. R. Soc. Lond. B, 197, 441-475 (1977).
[6] Marr, D.: Vision: a computational investigation into the human representation and processing of visual information. Freeman, San Francisco (1982).
[7] Attneave, F.: Multistability in perception. Sci. Amer., 225, 6, 63-71 (1971).
[8] Hollerbach, J.: Hierarchical shape description of objects by selection and modification of prototypes. Massachusetts Institute of Technology AI-TR-346, 47-51 (1975).
[9] Rubin, E.: Figure and ground. In D. C. Beardslee & M. Wertheimer (Eds.), Readings in perception. D. Van Nostrand, New York (1958).
[10] Brady, M.: Parts description and acquisition using vision. Proceedings of the Society of Photo-Optical Instrumentation Engineers (1982).
[11] Hoffman, D., Richards, W.: Representing plane curves for visual recognition. MIT AI Memo 630 (1982).
WHY PERSPECTIVE IS DIFFICULT: HOW TWO ALGORITHMS FAIL*

John R. Kender
Computer Science Department
Columbia University
New York, NY 10027

*This research was sponsored in part by the Defense Advanced Research Projects Agency under contract N00039-82-C-0427.

ABSTRACT

Attempting to derive image algorithms solely under orthographic projection is deceptively easy. However, orthographic algorithms often fail when applied to the perspective case. More critically, since many things simplify under orthography, such algorithms often give no evidence for their proper extension. This paper gives two such examples, showing both the problems that arise for them under perspective, and the surprising extensions that they require.

1. THE EDGE CURVATURE-GRADIENT SPACE RELATIONSHIP

Given two planar surfaces meeting at a common edge, there is a particularly simple but highly useful relationship between their gradient space representation and the concavity or convexity (with respect to the observer) of their joining edge [3]. It forms the basis of much qualitative and quantitative shape recovery work [1]. It is rederived here, using some of the simplifying methods described in [2]. As it stands, this edge curvature rule fails under perspective. Newly rederiving it under a perspective imaging geometry is useful for two reasons. First, it analytically extends some of the existing theory concerning the dual-space nature of the gradient space to the case of perspective geometry. Secondly, it indicates some of the difficulties that can occur in extending other orthographic relationships.

1.1. Under Orthography: The Rule Rederived

Since edge curvature is a relative measurement of the scene, both the rotational coupling of the gradient space and the translation invariance properties of the back-projection under orthography can be employed. All such problems can then be solved in the single standard position illustrated in Figure 1-1.

Figure 1-1: The standard configuration for analyzing curvature of edges under orthography.

The two planes are of the form z = p1x + q1y + c1 and z = p2x + q2y + c2. Since the edge they meet on is in the plane x = 0, their simultaneous solution in that plane requires that q1 = q2 and c1 = c2. The equality of the q values implies that the gradients of the two planes must lie on a horizontal line in the gradient space. Given the rotational coupling of the space, this is the equivalent of stating that the line joining the gradients is perpendicular to the image line. This confirms analytically the somewhat more elegant derivation of the same result by Mackworth, based on the dual nature of the gradient space.

If the planes meet to form a convex edge, then that edge will have a convex profile when cut by any plane not parallel to the edge itself. Similarly, concave edges generate only concave profiles. It follows that the converse is also true: a given profile also uniquely determines the curvature of the edge itself. Consider then the profile of the two above planes determined by a slice along the plane y = 0. The profile, in Figure 1-1, shows the cross sections z = p1x + c and z = p2x + c, where c is the common c value. There are four possible physical arrangements of two half-planes in three-space. However, only two arrangements give rise to an image having two visible planes, one on either side of the edge line. These are also shown in Figure 1-1.
For the upper arrangement to be convex requires that the right half-line have a larger slope than the left; that is, p1 > p2. Further, by the imaging geometry, the value p1 must have been derived from the equation describing the right-hand surface. It follows that a convex edge generates the constraint that the p value of the right-hand surface must exceed that of the left-hand surface: they are in the same relative left-right order in the gradient space as the surfaces are in the image. This ordering relationship is independent of orientation or translation.

The analogous relation for concavity also holds, derived from the bottom arrangement of half-planes. Again, p1 > p2, but the imaging geometry requires that p1 is derived from the left-hand plane. Under this arrangement, the relative ordering of gradient values is reversed. Thus, the rule of Mackworth has been analytically rederived.

1.2. Under Perspective: The Rule Fails

The edge curvature relations do not hold under perspective, as the illustration in Figure 1-2 shows. In it, a bird's-eye view of the imaging situation is given, along with the perspectively distorted image that results.

Figure 1-2: The failure of the edge curvature rule under perspective.

Suppose the edge curvature rule applied directly. Since all the surfaces labelled with like letters are parallel, they map into the same gradient point. (There is enough information in this image so that, with minimal assumptions concerning parallelism, the exact surface orientation of the surfaces can be analytically derived; see [1].) The gradient space thus consists of only two points, A and B. Since every edge is parallel to the y axis, the two points must lie on a horizontal line in the gradient space. However, it is impossible to assign a value to either point A or B and still uphold the edge curvature relation. This is because the image contains all four possible combinations of either concave or convex edge, and either left-right or right-left image ordering of A and B; any assignment would violate two of them. (Under orthography, the rightmost plane B would not be visible, being obscured by the rightmost A; the edge curvature rule would then apply.)
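The contradiction can be checked mechanically. The sketch below is a minimal illustration of the orthographic rule applied to a hypothetical edge list exhibiting the four combinations just described; it is not taken from the paper.

    # convex edge: p(right-hand surface) > p(left-hand surface); concave: reversed
    edges = [
        ("A", "B", "convex"),   # A left of B in the image, convex edge
        ("B", "A", "convex"),   # B left of A, convex
        ("A", "B", "concave"),  # A left of B, concave
        ("B", "A", "concave"),  # B left of A, concave
    ]

    def consistent(p):
        for left, right, kind in edges:
            if kind == "convex" and not p[right] > p[left]:
                return False
            if kind == "concave" and not p[right] < p[left]:
                return False
        return True

    # Only the relative order of pA and pB matters, so two trials suffice.
    print(any(consistent(p) for p in ({"A": 0, "B": 1}, {"A": 1, "B": 0})))
    # -> False: no gradient assignment upholds the rule, as claimed.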
1.3. Under Perspective: The Rule Repaired

Although the rule fails, the cause is not immediately clear. It does, however, seem to be related to the fact that some of the surfaces that are invisible under orthography can be seen under perspective, due to non-parallelism of the imaging rays. The edge curvature rule is again derived here, this time in the context of perspective, following the above derivation under orthography. Note that under perspective only the rotational coupling of the gradient space can be employed, however; the result will necessarily be dependent on the position of the edge line in the image. The standard position is illustrated in Figure 1-3.

Figure 1-3: The standard configuration for analyzing curvature of edges under perspective (edge at x = k).

The two planes are again of the form z = p1x + q1y + c1 and z = p2x + q2y + c2. Since they meet on the picture line x = k, they must meet within the image plane x = kz. The equation of their intersection is straightforward but complex. It can be written in the two-point form of a gradient space line as

    q1 / (p1 - 1/k) = q2 / (p2 - 1/k).

This implies that the gradients of the two planes must lie on the same line through (1/k, 0) in the gradient space; that is, through the dual (the gradient) of the plane z = (1/k)x. (As perspective degenerates to orthography, k approaches zero, and the gradients lie on a line parallel to the p axis: the result under orthography.) Again, this confirms the result of Mackworth, derived using the dual nature of the gradient space.

Further, the plane z = (1/k)x is the one member of the parallel family z = (1/k)x + 0y + c that passes through the origin: it forms their common vanishing line by its intersection with the picture plane. This plane through the origin is the "interpretation plane" of Mackworth [3]. Thus, the dual of the interpretation plane of the edge under perspective can be obtained by considering the edge to be a vanishing line. If the edge line is written as G·P = 1, its dual is simply G.

Now consider the profile of the two surface planes determined by a slice along the plane y = 0. Figure 1-3 shows the cross sections. As before, the curvature of the profile determines the curvature of the edge, and only two arrangements give rise to an image having two visible planes. Note, however, the critical difference under perspective: edge curvature must be measured with respect to the line of sight to the edge, as the imaging rays are no longer parallel.

Explicit analysis can be avoided by means of the following geometric argument. The gradient space is panned about the y axis so that the edge lies on the positive z axis; the gradient relations are established in the new coordinate system, and then the gradient space (with the relations) is panned back. The angle of pan is given by tan θ = k. Having the edge on the z axis is the orthographic case.

Suppose the edge is convex. In the panned gradient space, the gradient of the right-hand plane must be greater than that of the left-hand plane. However, when the gradient space is panned back, the relation necessarily has a different form, given that one or both gradient points may have "wrapped around through infinity". The gradients of those planes with infinite p value in the panned coordinate system also wrap around, to the point (1/k, 0).

Therefore, the edge curvature rule still holds if the gradient space is considered to be "cut" into two half-spaces at (1/k, 0) and its two infinite ends "sutured" together. This alters the concept of order along the line through (1/k, 0). Due to the cut, B "is to the left of" A is considered to be true under any cyclic permutation of the ordering: B, A, cut. Thus, A cut B is valid, but B cut A is not. (Under orthography no suturing is needed.)

The ambiguity of labeling the gradient points in the example given in Figure 1-2 is now resolvable. For example, consider the rightmost convex edge, located L units from the image origin. It still obeys the edge curvature rule, given that the gradient space is cut at L's interpretation plane's dual, (1/L, 0). Under the suturing, B (which has a positive gradient greater than 1/L) "is to the left of" A (which has a negative gradient), since the value of the cut lies between them. This is the case: A cut B.

Figure 1-4: The repair of the edge curvature rule under perspective.

These relations can be more easily illustrated on the Gaussian sphere. As seen in cross-section at the plane y = 0 in Figure 1-4, the panned gradient space corresponds to a partial revolution of the Gaussian sphere about the y axis. Planes of infinite p with respect to the line of sight are located in the standard coordinate system at (1/k, 0). Wrap-around is more easily seen: standard gradient points of higher p value than 1/k are really the remapping of slopes "past" negative infinity. These values are actually the projection of part of the "front" surface of the sphere.
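The cyclic "left of" test is compact enough to state directly. Below is a minimal sketch of one way to implement it (the function name and the treatment of the p axis as a circle closed at infinity are our assumptions); it reproduces the A cut B example just given.

    def left_of(p_b, p_a, cut):
        """True iff the surface with gradient p_b is 'to the left of' the one
        with p_a, where the p axis is cut at `cut` (= 1/k) and its infinite
        ends are sutured: (B, A, cut) must hold up to cyclic permutation."""
        pts = sorted([("B", p_b), ("A", p_a), ("cut", cut)], key=lambda t: t[1])
        order = tuple(name for name, _ in pts)
        rotations = {order, order[1:] + order[:1], order[2:] + order[:2]}
        return ("B", "A", "cut") in rotations

    # Rightmost convex edge of Figure 1-2, L units from the image origin:
    # B has a positive gradient greater than 1/L, A a negative one.
    L = 2.0
    print(left_of(p_b=1.5, p_a=-0.5, cut=1.0 / L))   # True:  A cut B is valid
    print(left_of(p_b=-0.5, p_a=1.5, cut=1.0 / L))   # False: B cut A is not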
In any case, the form of the edge curvature rule under perspective is non-trivial, and would complicate the perspective extension of existing orthographic algorithms employing it.

2. WHEN ORTHOGRAPHY IS DIFFICULT TO EXTEND

A second example further demonstrates that it is often difficult to predict the perspective form of a relation from its orthographic counterpart. Additionally, it indicates that what has been simplified under orthography may yet have a simple form under perspective.

Consider the intersection of several planes at a common vertex. Under orthography, the dual of the interpretation planes of each edge is important: this dual constrains, by the left-right rule, the choice of gradients for the pair of planes that flank it. However, each dual is infinite, since under orthography the interpretation plane is parallel to the line of sight. There is therefore no apparent relationship among the duals.

However, under perspective, the duals of the interpretation planes are usually finite; they can be determined as if the edges were the vanishing lines of the interpretation planes. (The exception occurs when the edge passes through the origin, implying an infinite gradient.) Further, as Figure 2-1 shows, the duals all lie on a common line in the gradient space. If the vertex is at picture point P, the line is simply G·P = 1. Thus, this line is perpendicular to the direction from the picture origin to the vertex, and at a distance of 1/||P|| from the gradient space origin.

Figure 2-1: Interpretation plane duals, under perspective, are finite and well-behaved.

The explanation is not difficult. All the interpretation planes must be parallel to the line of sight to the vertex: this is the vector T = (x, y, 1). Planes parallel to a given vector are those planes perpendicular to the plane whose normal is the vector. Thus, interpretation planes are of the form N·T = 0, or equivalently, G·P = 1.

This line of duals can simplify the problem of assigning the proper configurations of gradients for the planes AB, AC and BC. However, like many things under perspective, it has disturbingly non-linear behavior under translation.

3. WHY PERSPECTIVE LINE DRAWINGS ARE DIFFICULT

Thus, there are at least two problems associated with extending known orthographic results in line drawing analysis to perspective. The first is that the gradients of planes meeting at a common edge no longer obey a simple relation. Related planes lie on a line through the dual point of their interpretation plane (the point (1/k, 0), appropriately rotated). This is more cumbersome than having them lie on a gradient space line perpendicular to the image line. Further, as the edge moves in the image, this common point moves, too, in a non-linear way; under orthography, there are no changes. Secondly, concavity and convexity have no simple gradient space analogues, either. The gradient space must be viewed with respect to the "cuts" created by line-of-sight effects. Again, as an edge moves, the cut moves, and the relationships are also affected; not so in orthography.
Although the line-of-duals relationship discussed above can be an aid to line-drawing analysis (an aid unnecessary under orthography), it still appears that any extension of existing line-based techniques to the perspective case will be difficult and inelegant. Critical here is the strong translation dependence of perspective. In part, this gives further reason to encourage a trend to more surface-based approaches in image understanding work.

REFERENCES

[1] Kanade, T. A Theory of Origami World. Artificial Intelligence 13(3):279-311, May 1980.
[2] Kender, J. R. Shape from Texture. PhD thesis, Computer Science Department, Carnegie-Mellon University, 1980.
[3] Mackworth, A. K. Interpreting Pictures of Polyhedral Scenes. Artificial Intelligence 4(2):121-137, Summer 1973.
TRACKING KNOWN THREE-DIMENSIONAL OBJECTS*

Donald B. Gennery
Robotics and Teleoperator Group
Jet Propulsion Laboratory
Pasadena, California 91109

* The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

ABSTRACT

A method of visually tracking a known three-dimensional object is described. Predicted object position and orientation extrapolated from previous tracking data are used to find known features in one or more pictures. The measured image positions of the features are used to adjust the estimates of object position, orientation, velocity, and angular velocity in three dimensions. Filtering over time is included as an integral part of the adjustment, so that the filtering both smooths as appropriate to the measurements and allows stereo depth information to be obtained from multiple cameras taking pictures of a moving object at different times.

I INTRODUCTION

Previous work in visual tracking of moving objects has dealt mostly with two-dimensional scenes [1, 2, 3], with labelled objects [4], or with restricted domains in which only partial spatial information is extracted [5]. This paper describes a method of tracking a known solid object for which an accurate object model is available, determining its three-dimensional position and orientation rapidly as it moves, by using natural features on the object. Only the portion of the tracking problem concerning locking onto an object and tracking it when given initial approximate data is discussed here. The acquisition portion of the problem is currently being worked on and will be described in a later paper. Since the tracking proper portion discussed here has approximate information available from the acquisition data or from previous tracking data, it can quickly find the expected features in the pictures, and it can be optimized to use these features to produce high accuracy, good coasting through times of poor data, and optimum combining of information obtained at different times. (An earlier, similar method lacking many of the features described here was previously reported [6].)

The current method uses a general object model consisting of planar surfaces. The features found in the pictures are the brightness edges formed by the intersection of the planar faces of the object, caused by differences in illumination on the different faces. By comparing the positions of the actual features in the pictures to their predicted positions, discrepancies are generated that are used in a least-squares adjustment (based on a linearization using partial derivatives) to refine the current estimates of object position and orientation. Filtering over time is included in the adjustment to further reduce error by smoothing (including different amounts of smoothing automatically in different spatial directions as required by the accuracy of the data), to obtain velocities for prediction, and to enable information obtained at different times to be combined optimally. Thus stereo depth information is obtained when more than one camera is used, even though individual features are not tracked or matched between pictures, and even if the different cameras take pictures at different times. When only one camera is used, the approximate distance to the object is still determined, because of its known size.
In order to avoid the singularity in the Euler angle representation (and for other reasons mentioned later), the current orientation of the object is represented by quaternions, and the incremental adjustment to orientation is represented by an infinitesimal rotation vector. (Corben and Stehle [7] provide a discussion of quaternions, and Goldstein [8] provides a discussion of infinitesimal rotation vectors.)

The tracking program works in a loop with the following major steps: prediction of the object position and orientation for the time at which a picture is taken, by extrapolating from the previously adjusted data (or from acquisition data when starting); detection of features, by projecting into the picture to find the actual features and to measure their image positions relative to the predictions; and the use of the resulting data to adjust the position, orientation, and their time derivatives so that the best estimates for the time of the picture are obtained. These steps will be described briefly in the following sections. A more detailed description will appear in a paper published elsewhere.

II PREDICTION

The prediction of position and orientation is based upon the assumption of random acceleration and angular acceleration (that is, a constant power spectrum to frequencies considerably higher than the rate of picture taking). Since random acceleration implies constant expected velocity, the predicted position itself is obtained simply by adding to the position estimate from the previous adjustment the product of the previous adjusted velocity times the elapsed time since the previous picture, for each of the three dimensions. Similarly, the predicted orientation is obtained by rotating from the previous adjusted orientation as if the previous adjusted angular velocity vector applied constantly over the elapsed time interval. (This orientation extrapolation is particularly simple when quaternions are used.) The predicted velocity and angular velocity are simply equal to the previous adjusted values.
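A minimal sketch of this extrapolation step follows (variable names are ours, not the program's): the position is advanced at constant velocity, and the quaternion is rotated by the angular velocity integrated over the elapsed time τ.

    import math

    def quat_mult(a, b):
        """Hamilton product of quaternions (w, x, y, z)."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def predict(pos, quat, vel, omega, tau):
        pos = [p + v * tau for p, v in zip(pos, vel)]
        wnorm = math.sqrt(sum(w * w for w in omega))
        if wnorm > 1e-12:
            theta = wnorm * tau                      # total rotation angle
            s = math.sin(theta / 2.0) / wnorm
            dq = (math.cos(theta / 2.0), omega[0]*s, omega[1]*s, omega[2]*s)
            quat = quat_mult(dq, quat)
        return pos, quat, vel, omega   # velocity and angular velocity unchanged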
However, these predicted values must have appropriate weight in the adjustment, and, since the weight matrix should be the inverse of the covariance matrix (see, for example, Mikhail [9]), the computation of the covariance matrix of the predicted data will now be discussed.

The covariance matrix of the previous adjusted data is denoted by S. This is a 12-by-12 matrix, since there are three components of position, three components of incremental rotation, three components of velocity, and three components of angular velocity (assumed to be arranged in that order in S). To a first approximation, the covariance matrix S̃ of the predicted data can be obtained by adding to S terms to represent the additional uncertainty caused by the extrapolation. These must include both of the following: terms to increase the uncertainty in position and orientation caused by uncertainty in the velocity and angular velocity that were used to do the extrapolation, and terms to increase the uncertainty in velocity and angular velocity caused by the random acceleration and angular acceleration occurring over the extrapolation interval. The former effect can be produced by using the following 12-by-12 transformation matrix:

    A = [ I  0  τI  0  ]
        [ 0  I  0   τI ]
        [ 0  0  I   0  ]
        [ 0  0  0   I  ]

where I is the 3-by-3 identity matrix and τ is the elapsed time interval of the extrapolation. Then the covariance matrix can be transformed by this matrix, and additional terms can be added for the latter effect, as follows:

    S̃ = A S A^T + [ 0  0  0    0   ]
                  [ 0  0  0    0   ]
                  [ 0  0  aτI  0   ]
                  [ 0  0  0    ατI ]

where a and α are the assumed values of the power spectra of acceleration and angular acceleration, respectively, and the superscript T denotes the matrix transpose. (Mikhail [9] provides the necessary background information on matrix algebra.) The larger are the values of a and α, the larger will be the uncertainty in the predicted values as indicated by S̃, and thus the less smoothing over time will be produced in the adjustment.

In practice, the above matrix multiplications are multiplied out so that the actual computations are expressed in terms of 3-by-3 matrices. This is computationally faster since A is so sparse. However, for greater accuracy two additional effects are included in the implemented program. First, the effect on orientation of uncertainty in the previous orientation and angular velocity will be influenced by the rotation that has occurred during the time τ. This causes some modification of the A matrix. Second, additional terms involving a and α are added to S̃ to reflect the influence that the random acceleration during the just elapsed time interval τ has on position and orientation. These refinements will be described in another paper.
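The first-approximation covariance extrapolation above transcribes directly into a few lines; the following sketch builds the full 12-by-12 matrices rather than the 3-by-3 factored form the paper says the program actually uses.

    import numpy as np

    def predict_covariance(S, tau, a, alpha):
        """S: 12x12 covariance of the previous adjusted data; tau: elapsed
        time; a, alpha: assumed acceleration power-spectrum values."""
        I3, Z3 = np.eye(3), np.zeros((3, 3))
        A = np.block([[I3, Z3, tau * I3, Z3],
                      [Z3, I3, Z3, tau * I3],
                      [Z3, Z3, I3, Z3],
                      [Z3, Z3, Z3, I3]])
        Q = np.block([[Z3, Z3, Z3, Z3],
                      [Z3, Z3, Z3, Z3],
                      [Z3, Z3, a * tau * I3, Z3],
                      [Z3, Z3, Z3, alpha * tau * I3]])
        return A @ S @ A.T + Q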
III DETECTION OF FEATURES

Once the predicted object position and orientation are available for a picture, the vertices in the object model that are predicted to be visible (with a margin of safety) are projected into the picture by using the known camera model [10]. The lines in the picture that correspond to edges of the object are computed by connecting the appropriate projected vertices. Analytical partial derivatives of the projected quantities with respect to the object position vector and object incremental infinitesimal rotation vector are also computed.

Brightness edges are searched for near the positions of the predicted lines. The brightness edge elements are detected by a modified Sobel operator (including thresholding and thinning), which we have available both in software form and in special hardware that operates at the video rate [11]. The program only looks for edge elements every three pixels along the line, since the Sobel operator is three pixels wide. For each of these positions it searches approximately perpendicularly to the line. Currently it accepts the nearest edge element to the predicted line, if it is within five pixels. However, a more elaborate method has been devised. This new method varies the extent of the search according to the accuracy of the predicted data, accepts all edge elements within the search width, and gives the edge elements variable weight according to their agreement with the predicted line and their intensity. This method will be described in a later paper.

In principle, the position of each detected edge element could be used directly in the adjustment described in the next section. The observed quantity e_i would be the perpendicular distance from the predicted line to the detected edge element, the 1-by-6 partial derivative matrix B_i would be the partial derivatives of -e_i with respect to the three components of object position and three components of incremental object rotation, and the weight w_i of the observation would be the reciprocal of the square of its standard deviation (accuracy). (Currently, this standard deviation is a given quantity and is assumed to be the same for all edge elements.) However, for computational efficiency the program uses a mathematically equivalent two-step process. First, a corrected line is computed by a least-squares fit to the perpendicular discrepancies from the predicted line. In effect, the quantities obtained are the perpendicular corrections to the predicted line at its two end vertices, which form the 2-by-1 matrix E_i, and the corresponding 2-by-2 weight matrix W_i. B_i is then the 2-by-6 partial derivative matrix of -E_i with respect to the object position and incremental orientation. Second, these quantities for each predicted line are used in the adjustment described in the next section.

IV ADJUSTMENT

Now the nature of the adjustment to position and orientation will be discussed. If no filtering were desired, a weighted least squares solution could be done, ignoring the predicted values except as initial approximations to be corrected. The standard way of doing this [9, 12] is as follows:

    N = Σi B_i^T W_i B_i
    C = Σi B_i^T W_i E_i
    P = P̃ + N^-1 C

where B_i is the matrix of partial derivatives of the ith set of observed quantities with respect to the parameters being adjusted, W_i is the weight matrix of the ith set of observed quantities, E_i is a vector made up of the ith set of observed quantities, P is the vector of parameters being adjusted, and P̃ is the initial approximation to P. The covariance matrix of P, which indicates the accuracy of the adjustment, is then N^-1. For the case at hand, P is 6-by-1 and is composed of the components of position and incremental orientation, N is 6-by-6, and C is 6-by-1. The meanings of E_i, W_i, and B_i for this case were described in the previous section.

The first three elements of P form the new adjusted position vector of the object. The last three elements form an incremental rotation vector used to correct the object orientation. This could be used directly to update the rotation matrix (as explained by Goldstein [8]), but, since the primary orientation representation in the implemented tracker is in terms of quaternions, it is used instead to update the quaternion that represents orientation, and the rotation matrix is computed from that. This method also makes convenient the normalization to prevent accumulation of numerical error. (The relationship between quaternions and rotations is described by Corben and Stehle [7].)
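The unfiltered least-squares step is short enough to sketch; the function below (with names of our choosing) accumulates the normal equations over the per-line observations just defined and solves for the six corrections.

    import numpy as np

    def adjust(P0, observations):
        """P0: 6-vector initial approximation (position, incremental rotation).
        observations: iterable of (E_i, W_i, B_i) with shapes (2,), (2,2), (2,6)."""
        N = np.zeros((6, 6))
        C = np.zeros(6)
        for E, W, B in observations:
            N += B.T @ W @ B
            C += B.T @ W @ E
        P = P0 + np.linalg.solve(N, C)    # P = P~ + N^-1 C
        cov = np.linalg.inv(N)            # indicates the accuracy of the adjustment
        return P, cov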
The velocity and angular velocity are included in the adjustment by considering twelve adjusted parameters consisting of the six-vectors P and V, where V is composed of the three components of velocity and the three components of angular velocity. The measurements which produce N and C above contribute no information directly to V. However, the predicted values P̃ and Ṽ can be considered to be additional measurements directly on P and V with covariance matrix S̃, and thus weight matrix S̃^-1. (Giving the predicted values weight in the solution produces the filtering action, similar to a Kalman filter, because of the memory of previous measurements contained in the predicted information.) Therefore, the adjustment including the information contained in the predicted values in principle could be obtained as follows:

    S = ( [N 0; 0 0] + S̃^-1 )^-1
    [P; V] = [P̃; Ṽ] + S [C; 0]

However, using the above equation is inefficient and may present numerical problems, since the two matrices to be inverted are 12-by-12 and may be nearly singular. If S̃ is partitioned into 6-by-6 matrices as follows,

    S̃ = [ S̃pp  S̃pv ]
        [ S̃vp  S̃vv ]

the following mathematically equivalent form in terms of 6-by-6 matrices can be produced by means of some matrix manipulation:

    Spp = (I + S̃pp N)^-1 S̃pp
    Spv = (I + S̃pp N)^-1 S̃pv
    Svv = S̃vv - S̃vp N (I + S̃pp N)^-1 S̃pv
    P = P̃ + Spp C
    V = Ṽ + Svp C

Not only is this form more efficient computationally, but the matrix to be inverted, (I + S̃pp N), is guaranteed to be nonsingular, because both S̃pp and N are non-negative definite. The covariance matrix S of the adjusted data is formed by assembling Spp, Spv, Svp, and Svv into a 12-by-12 matrix, similarly to S̃.

Figure 1. Digitized picture from left camera.
Figure 2. Results from Figure 1.
Figure 3. Results from next picture from right camera.
Figure 4. Results from next picture from left camera.
Figure 5. Results from right camera with obscuring object.
Figure 6. Results from left camera five pictures later.

V RESULTS

Figures 1, 2, 3, and 4 show the tracker in action. The object being tracked (a hexagonal prism) is 203 mm tall and is moving upwards at about 16 mm/sec. Pictures from two cameras were taken alternately. The values used for the acceleration parameters were a = 1 mm²/sec³ and α = 0.0001 radian²/sec³. The assumed standard deviation of the edge measurements was one pixel. The software version of the edge detector was used. The program, which runs on a General Automation SPC-16/85 computer, was able to process each picture in this example in 1.6 seconds, so that the complete loop through both cameras required 3.2 seconds. (When the hardware edge detector is used, the time per picture in a case such as this is only 0.5 second.)

Figure 1 shows the raw digitized image corresponding to Figure 2. For successive pictures from the left, right, and left cameras, respectively, Figures 2, 3, and 4 show the following information. In a window that the program puts around the predicted object for applying the software edge detector, the raw digitized picture has been replaced by the detected brightness edges (showing as faint lines). (With the hardware edge detector the entire picture would be so replaced.) Superimposed on this are the predicted lines corresponding to edges of the object (showing as brighter lines). The bright dots are the edge elements which were used in the adjustment. (These may be somewhat obscured in the figures when they lie directly on the predicted lines.)

The program is able to tolerate a moderate amount of missing and spurious edges. This is because it looks for edges only near their expected positions, because the typical abundance of edges produces considerable overdetermination in the adjustment, and because of the smoothing produced by the filtering. Figures 5 and 6 (similar to Figures 2, 3, and 4) show an example of an obscuring object passing in front of the tracked object without causing loss of track. Figure 5 is from the right camera, and Figure 6 is from the left camera five pictures later (so that there are two pictures from the left camera and two from the right camera between these in time that are not shown).

ACKNOWLEDGMENTS

The programming of the tracker was done primarily by Eric Saund, with portions by Doug Varney and Bob Cunningham. Bob Cunningham assisted in conducting the tracking experiments.

REFERENCES
[1] H.-H. Nagel, "Analysis Techniques for Image Sequences," Fourth International Joint Conference on Pattern Recognition, Tokyo, November 1978, pp. 186-211.
[2] W. N. Martin and J. K. Aggarwal, "Dynamic Scene Analysis," Computer Graphics and Image Processing 7 (1978), pp. 356-374.
[3] A. L. Gilbert, M. K. Giles, G. M. Flachs, R. B. Rogers, and Y. H. U, "A Real-Time Video Tracking System," IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-2 (1980), pp. 47-56.
[4] H. F. L. Pinkney, "Theory and Development of an On-Line 30 Hz Video Photogrammetry System for Real-Time 3-Dimensional Control," Proceedings of the ISP Symposium on Photogrammetry for Industry, Stockholm, August 1978.
[5] J. W. Roach and J. K. Aggarwal, "Computer Tracking of Objects Moving in Space," IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-1 (1979), pp. 127-135.
[6] E. Saund, D. B. Gennery, and R. T. Cunningham, "Visual Tracking in Stereo," Joint Automatic Control Conference, sponsored by ASME, University of Virginia, June 1981.
[7] H. C. Corben and P. Stehle, Classical Mechanics (Second Edition), Wiley, 1960.
[8] H. Goldstein, Classical Mechanics (Second Edition), Addison-Wesley, 1980.
[9] E. M. Mikhail (with contributions by F. Ackermann), Observations and Least Squares, Harper and Row, 1976.
[10] Y. Yakimovsky and R. T. Cunningham, "A System for Extracting Three-Dimensional Measurements from a Stereo Pair of TV Cameras," Computer Graphics and Image Processing 7 (1978), pp. 195-210.
[11] R. Eskenazi and J. M. Wilf, "Low-Level Processing for Real-Time Image Analysis," Jet Propulsion Laboratory Report 79-79.
[12] D. B. Gennery, "Modelling the Environment of an Exploring Vehicle by Means of Stereo Vision," AIM-339, STAN-CS-80-805, Computer Science Dept., Stanford University, 1980.
A MODEL BASED VISION SYSTEM FOR RECOGNITION OF MACHINE PARTS

Katsushi Ikeuchi and Yoshiaki Shirai
Computer Vision Section, Electrotechnical Laboratory
Umezono 1-1-4, Sakura-mura, Niihari-gun, Ibaraki, 305 Japan

ABSTRACT

This paper describes a vision system based on the photometric stereo system and a model generator called GEOMAP. The photometric stereo system obtains a needle map from shading information of an observed image. An extended Gaussian image of the needle map reduces possible attitudes of an object relative to the viewer. The model generator GEOMAP generates a needle map which would be observed from the viewer direction determined from the EGI. Comparing the needle map generated by GEOMAP with the needle map obtained by the photometric stereo system makes the final recognition of the object.

I INTRODUCTION

Machine vision at the low level often provides a collection of local surface normals [1,2,3]. This collection of local surface normals is referred to as a needle map [1]. In particular, the photometric stereo system [4,5,6,7] can provide a needle map from shading information. A global recognition of an object requires interpreting these local representations. Each local representation is obtained at a particular point in the viewer-centered coordinate system. On the other hand, an object model is usually expressed in a coordinate system based on an object center and a natural axis of the object [8,9]. These two coordinate systems are independent of each other. Thus, we have to recover the object-centered coordinate system from local representations in the viewer-centered coordinate system. We propose to use the extended Gaussian image (EGI) [1,10,11,12,13] as a constraint to reduce possible attitudes of an object relative to the viewer. Once an attitude is determined, we can easily compare a representation in the viewer-centered coordinate system with a representation in the object-centered coordinate system.

Fig. 1 denotes the total schema of our vision system. The photometric stereo system provides a needle map from three brightness arrays. An extended Gaussian image from the needle map determines an object's attitude. GEOMAP generates a needle map based on the obtained attitude. Comparison between the observed needle map and the generated needle map gives a final decision on what kind of an object is observed.

Fig. 1 Total schema (shading information, obtained needle map, attitude, generated needle map, recognition result).

II PHOTOMETRIC STEREO SYSTEM

The photometric stereo system obtains needle maps from shading information [4,5,6,7]. The orientation of patches on the surface of an object can be determined from multiple images taken with different illumination, but from the same viewing direction. The method is referred to as the photometric stereo method [4]. The photometric stereo can be implemented using a lookup table based on numerical inversion of the reflectance map [1,6]. A needle map is obtained from three brightness images using the lookup table.

The brightness distribution of a sphere under the lighting system provides a reflectance map [5,6]. Since we know the surface orientation of a sphere at each image point, we can obtain a reflectance map which denotes the brightness value at each surface orientation. We use a Lambertian sphere to obtain a reflectance map for diffusely reflecting surfaces under the point-source illumination. Each image point provides three brightness values (I1, I2, I3).
Then they are normalized as E_i = I_i / (I1 + I2 + I3), i = 1, 2, 3, to cancel the effect of albedo and the shading effect of a lens. This operation provides three reflectance maps. The three reflectance maps are numerically inverted into a lookup table which gives surface orientation from three brightness values.

Fig. 2 shows an example of needle maps obtained from real objects (a cylinder, an octagonal prism, and a square prism) using the system. A TV camera looks down at the objects from the ceiling. The objects lie with their long axes along the x coordinate of the image plane. This looking-down-TV method gives a good constraint on the possible viewer directions, as discussed in the next section. Since these three objects have rectangular silhouettes in this attitude, the three objects cannot be discriminated using so-called silhouette matching.

Fig. 2 Obtained needle maps from the photometric stereo: (a) Cylinder (b) Octagonal prism (c) Square prism.
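A minimal sketch of the table inversion just described follows (the function and parameter names are ours; the quantization step is an assumption). Calibration pairs the three measured intensities with the known normal at each pixel of the sphere images; at run time the same normalization indexes the table.

    def normalize(i1, i2, i3):
        s = i1 + i2 + i3
        return (i1 / s, i2 / s)   # (E1, E2); E3 is redundant since the three sum to 1

    def build_table(calibration, step=0.01):
        """calibration: iterable of ((i1, i2, i3), normal) from the sphere images."""
        table = {}
        for (i1, i2, i3), normal in calibration:
            e1, e2 = normalize(i1, i2, i3)
            table[(round(e1 / step), round(e2 / step))] = normal
        return table

    def lookup(table, i1, i2, i3, step=0.01):
        e1, e2 = normalize(i1, i2, i3)
        return table.get((round(e1 / step), round(e2 / step)))  # None if unseen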
III DETERMINING ATTITUDE OF AN OBJECT

There are three degrees of freedom in an object's attitude relative to the viewer. Although a brute-force technique, such as search through the space of possible attitudes, can be applied to matching of EGIs directly for determining the attitude of an object, we will reduce this search space using constraints.

A. Constraints on attitudes of an object

We propose three kinds of constraints to reduce possible attitudes of an object. (From now on, we assume that the image plane is perpendicular to the line of sight. In other words, the image formation system is modelled using the orthographic projection.)

Knowing the gravity direction reduces possible attitudes. Two conditions must be satisfied to apply this constraint to the observed data:

1) Stable attitudes of models must be known. If we have a solid model, we can determine the position of the gravity center. In particular, GEOMAP has a solid model, and can determine the position of the gravity center. Then, we can obtain stable attitudes of the model from the position of the gravity center. We denote the opposite direction to the gravity at a stable attitude as the stability direction of the model.

2) The angle between the gravity direction and the line of sight must be detected. It is interesting that a human being also has semicircular canals to detect the direction. Let us assume that we can detect the angle between the gravity direction and the line of sight from a level on a TV camera as θ.

The gravity direction is parallel to the stability direction from the definition. Also, the line of sight always has the same direction from the assumption of orthographic projection. Thus the angle between the stability direction and the line of sight is the same angle θ. In other words, the possible line of sight directions locate on a circle on the Gaussian sphere whose center corresponds to the stability direction and whose radius corresponds to the angle θ between the gravity direction and the line of sight. In particular, when the TV camera observes the object in the gravity direction, as in our case, the circle reduces to the center point. For example, when a square prism is looked down upon from the ceiling of the observer room, only six points corresponding to the prism's faces on the Gaussian sphere are necessary to be considered as candidate directions of the line of sight. This constraint also works under partial observations.

The ratio of area projected onto the image plane against the original surface area constrains the possible line of sight direction [11]. For example, observing an ellipsoid from the long-axis direction gives a smaller projected area than looking at the same ellipsoid from the direction perpendicular to the axis. Yet the surface area is the same.

We also must find the rotation of an object around the line of sight. We use the principal EGI inertia direction from the line of sight direction [11]. We will use the 2-D axis for simplicity of calculation, where the calculations are also done only if a surface patch is visible from the direction.

The constraints mentioned above can be extended to treat partial observations. Partial observation occurs due to characteristics of the photometric stereo system. The photometric stereo system can determine surface orientation at the area where all three light sources project their light directly. Our photometric stereo system can determine surface orientation within the area where the zenith angle of the surface normal is less than 60 degrees. The ratio and the axis direction are calculated over this area.

B. Determining attitude using EGI

Comparing the EGI from observed data with EGIs from prototypes determines the attitude of an object. An EGI of an object denotes the surface-normal distribution of the object [1,10,11,12,13]. We implement the EGI on the semi-regular geodesic dome from a two-frequency dodecahedron [11]. The EGI at each attitude is calculated using GEOMAP. The EGI is stored as a two-dimensional lookup table. Each column corresponds to one attitude, while each row contains two kinds of information:

(1) Projected area vs. surface area. Comparing this registered ratio with an observed ratio determines whether the EGI distribution is necessary to be checked at this attitude or not.

(2) Local EGI distribution. An EGI observed at this attitude is stored in a one-dimensional table based on the geodesic-dome tessellation. The EGI in the table is aligned so that the least inertia axis coincides with the x-coordinate on the image plane. Thus, the observed EGI should be rotated to have the least inertia axis as that direction before comparison. Likelihood is calculated by determining whether every observed EGI cell contains a similar amount of mass at the corresponding neighboring cells in the table [11]. The program determines the attitude obtaining the highest likelihood as the observed attitude.

Table 1 shows EGI comparison results. A needle map of a cylinder (Fig. 2(a)) gives an EGI as shown in Fig. 3(a) and 0.8711 as the ratio. If the needle map is assumed to come from a cylinder, the ratio constraint reduces 240 possible attitudes to 2 possible attitudes: directions 22 and 138. See Fig. 4. At direction 22, EGI comparison gives 0.3437 as the likelihood value. If the same needle map is assumed to come from an octagonal prism, 22, 136 and 138 are possible attitudes. Under this assumption, direction 138 gives the highest likelihood among the possible attitudes. At the EGI comparison, the observed EGI is too different to be assumed to be an EGI from the square prism. The needle map in Fig. 3(c) comes from the planar surface of the prism. All objects have planar surfaces. The direction 45 corresponds to the direction observing the base of each prism. See Fig. 4(b).

Fig. 3 Obtained EGIs from the needle maps: (a) Cylinder (b) Octagonal prism (c) Square prism.

Fig. 4 Prototypes and their EGIs: (a) Cylinder (138) (b) Cylinder (45), Octagonal prism (45), Square prism (45), Square prism (138) (c) Octagonal prism (138) (d) Square prism (22).

Table 1 EGI comparison results (first row of the table):

               CYLINDER      OCTAGONAL PRISM   SQUARE PRISM
    CYLINDER   22: 0.3305    22: 0.0           22: 0.0
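The two-stage test of section B can be sketched as follows. This is our own minimal rendering, assuming a table keyed by attitude; the exact neighbor weighting of [11] is not reproduced, and the tolerances are placeholders.

    def candidate_attitudes(table, observed_ratio, ratio_tol=0.05):
        """table: attitude -> (ratio, egi_cells). Cheap ratio screen first."""
        return [att for att, (ratio, _) in table.items()
                if abs(ratio - observed_ratio) < ratio_tol]

    def likelihood(observed_cells, stored_cells, cell_tol=0.1):
        """Fraction of observed EGI cells whose mass matches the stored EGI."""
        hits = sum(1 for o, s in zip(observed_cells, stored_cells)
                   if abs(o - s) <= cell_tol)
        return hits / float(len(observed_cells))

    def best_attitude(table, observed_ratio, observed_cells):
        cands = candidate_attitudes(table, observed_ratio)
        return max(cands, key=lambda a: likelihood(observed_cells, table[a][1]))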
GEOMAP, developed by Kimura and Hosaka at ETL [14], is a program package to synthesize 3D objects. GEOMAP maintains 3D information of an object in the program. GEOMAP has the following characteristics:

(0) GEOMAP has primitive objects.
(1) GEOMAP can move and rotate an object.
(2) GEOMAP can unify an object with another object into a new object.
(3) GEOMAP can make constraint endowment.
(4) GEOMAP can make a 2D projection of an object in any direction.
(5) GEOMAP can store/retrieve data of each object to/from files.
(6) Maclisp at ETL can call GEOMAP directly with commands to GEOMAP.
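For a planar-faced model, generating the synthetic needle map at a hypothesized attitude amounts to rotating each face, culling back-facing ones, and painting their normals into the pixels their projections cover. The sketch below is our own illustration (not GEOMAP's interface); the bounding-box rasterization is deliberately crude, it assumes the projection lands inside the image, and the camera is taken to look along the +z axis under orthography.

    import numpy as np

    def needle_map(faces, R, shape):
        """faces: list of (vertices Nx3, unit normal 3); R: 3x3 attitude rotation."""
        needles = np.zeros(shape + (3,))
        for verts, normal in faces:
            v = verts @ R.T
            n = R @ normal
            if n[2] >= 0:                 # back-facing in our camera convention
                continue
            # crude rasterization: fill the bounding box of the projected face
            x0, x1 = int(v[:, 0].min()), int(v[:, 0].max())
            y0, y1 = int(v[:, 1].min()), int(v[:, 1].max())
            needles[y0:y1 + 1, x0:x1 + 1] = n
        return needles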
and Tenenbaum, J.M., l'Recovering Intrinsic Scene Characteristics from Images." Hanson, A.R. & Riseman, E.M., (Ed.) Computer ------ Vision Systems, Academic Press, New York, 1978. Kender, J.R., "Shape from Texture," Ph.D. Thesis, Computer Science Dept., CMU, 198Cr--~ -- Woodham, R.J., "Photometric Stereo: A Reflectance Map Technique for Determining Surface Orientation from Image Intensity," in Proc. SPIE, Vol. 155, 1978. ~ -- Ikeuchi, K., "Determining Surface Orientations of Specular Surfaces by Using the Photometric Stereo Method," IEEE Trans. PAMI, Vol.PAMI-2, No.6, Nov. 1981, 661-66? Silver, W.M., "Determining Shape and Reflectance Using Multiple Images," MS Thesis, EECS, MIT, June 1980. Coleman, E.N., and Jain, R.,"Shape form Shading for Surfaces with Texture and Specularity," in Proc. IJCAI-81, Vancouver, Aug. ,981, 652-657. Binford, T.O., "Visual Perception by Computer." IEEE Conf. on Systems and Control, Miami, P - --- -~- Dee 1971. Marr, D. and Nishihara, K.H., "Representation and Recognition of the Spatial Organization of Three-dimensional Shape." Proc. R. sot . -- Lond., B:200, (1978). Smith, D.A., "Using Enhanced Spherical Images for Object Representation,' AI Memo No. - - -- 530, AI Lab., MIT, 1979. Ikeuchi,K., "Recognition of Objects Using Extended Gaussian Image'" in Proc. IJCAI-81, Vancouver, Aug. ,981, 595-600. Ballard, D.H., and Sabbah, D.,"On Shape," in Proc. IJCAI-81, Vancouver, Aug. 1981, 607-612. Bajcsy, R., "Three-dimensional Scene Analysis," in Proc. ICPR-80, Miani Beach, Dec. 1980, 1064-1074. Hosaka, M, Kimura, F. and Kakishita, N=, "A Unified Method for Processing Polyhedra," in Proc. IFIP, 1974, 768-772. 21
REMOVING RESTRICTIONS IN THE RELATIONAL DATA BASE MODEL: AN APPLICATION OF PROBLEM SOLVING TECHNIQUES

Laurent Siklóssy
Dept. of Information Eng.
University of Illinois at Chicago
Chicago, IL 60680 (USA)

Jean-Louis Laurière
Institut de Programmation
Université Paris VI
4, Place Jussieu, 75230 Paris Cedex 05 (FRANCE)

ABSTRACT

The principal restrictions previously placed on the relational data base model have been removed in the L2 model presented here. We extend the model to include null (i.e. unknown and non-relevant) values (even in keys), repetitions of tuples, functional dependencies, a very rich set of constraints and information originating from several sources. The programmed problem-solver ALICE is utilized to manipulate the constraints and to simplify relations: to answer a query, ALICE selects both the tuples which answer the query and those which might answer the query upon appropriate substitutions for unknown values.

1. INTRODUCTION

We assume that the reader is familiar with the relational data base model [Codd, 1975; Date, 1981; Ullman, 1980]. In practice, some values may not be known in a data base; they will be called unknown values.

In other cases, to avoid a multiplicity of similar data bases, non-relevant values are introduced. For example, the value of the attribute "Name of Spouse" is non-relevant for a person who is not married. Unless we permit such non-relevant values, we would be obliged to divide a data base on persons into two similar data bases: one for married persons, a second one for unmarried persons. Unknown and non-relevant values are combined in the term null values.

Data bases with null values have been studied by a variety of authors [Codd, 1979; Grant, 1977; Imielinski et al, 1981; Lipski, 1979, 1981; Siklóssy, 1981; Vassiliou, 1979, 1980], among others. Generally these authors have placed a variety of restrictions on the use of null values. Some authors do not permit non-relevant values. Others do not permit null values in a key.

We shall show that it is possible, with the help of the problem-solving program ALICE [Laurière, 1976], to lift the restrictions previously advocated. In addition, our extended model will admit functional dependencies and a rich set of constraints, since ALICE can manipulate these dependencies and constraints. If queried, ALICE can indeed return both the tuples which satisfy the query, and also those tuples which might satisfy the query upon appropriate specifications of unknown values.

We shall first discuss the restrictions previously imposed upon null values, and will argue that their justification is tenuous at best. Then, through a set of examples, we shall describe some extensions of data bases with null values that ALICE can process.

2. ARE RESTRICTIONS ON NULL VALUES NECESSARY?

Two principal restrictions on null values have been proposed: no null values are permitted in a key, and there should be no duplication of the same tuple. The first restriction is forcefully stated, for example by Codd (1979, p. 400):

Rule 1 (Entity Integrity): No primary key value of a base relation is allowed to be null or to have a null component.

Insisting upon a fully defined key for each tuple automatically prohibits the existence of two (or more) tuples having identical keys. Such tuples can be merged (or recognized as inconsistent).

With the restrictions that we have just mentioned, it is not possible to represent the information provided by observer 1: "I saw somebody (but I don't know her name) with only one arm, wearing green trousers", since the key (the person's name) is unknown, and hence null. Nor is there a way to represent the information provided by observer 2: "I saw the same person as observer 1, and I noticed that she had hazel eyes. But I don't know her name either."

If we assume that a data base is grown piecemeal, as a result of the contributions of a variety of observers (or informants), then we must remove the above restrictions. A tuple can be viewed as the result of an observation by an observer. Observations are then often incomplete. The question becomes: can one still compute? That is, is it possible to answer queries from such unrestricted data bases? We shall now show, through a series of examples, that indeed we can still compute.

3. THE L2 MODEL: AN EXTENDED RELATIONAL DATA BASE MODEL
With the restrictions that we have just mentioned, it is not possible to represent the information provided by observer 1: "I saw somebody (but I don't know her name) with only one arm, wearing green trousers", since the key (the person's name) is unknown, and hence null. Nor is there a way to represent the information provided by observer 2: "I saw the same person as observer 1, and I noticed that she had hazel eyes. But I don't know her name either."

If we assume that a data base is grown piecemeal, as a result of the contribution of a variety of observers (or informants), then we must remove the above restrictions. A tuple can be viewed as the result of an observation by an observer. Observations are then often incomplete. The question becomes: Can one still compute? That is, is it possible to answer queries from such unrestricted data bases? We shall now show, through a series of examples, that indeed we can still compute.

3. THE L2 MODEL: AN EXTENDED RELATIONAL DATA BASE MODEL

We shall now describe our extended relational model, which will be referred to as the L2 model for short. Although the model can be described formally and precisely, we shall attempt a more intuitive and explanatory description here.

A relation in the L2 model consists of two-dimensional tables. The columns are labelled as attributes; the rows will also be referred to as tuples. We can think initially of each row as representing a set of partial observations by an informer on some individual (or entity). The system ALICE will try to recognize that several rows refer to the same individual.

The following values of attributes in a tuple are presently accepted in the L2 model, and can be processed by ALICE:

1°/ Scalar values, such as "blue" or "potato".

2°/ Ranges of scalar values, such as {blue or green or red}, meaning the value is blue, green or red; or such as [22..35], meaning the value is between 22 and 35, with ends included.

3°/ Non-relevant values, denoted NR. An attribute may have several non-relevant values. For example, the attribute "Name of Spouse" may have the value "NR1" to mean "never married", or "NR2" meaning "spouse is dead". ALICE must be given the specific semantics of non-relevant values by means of formal definitions.

4°/ Unknown values. The attribute for a tuple has a value which is unknown in the observation represented by the tuple. Unknown values are represented by: ?. If two unknown values are the same, as for example in "I saw the same person as you did", then the unknown is indexed, for example as: ?5. The semantics are clear: ?i = ?i; while it is not generally known whether ?i = ?j, if i ≠ j, or whether ?i = ?.

5°/ Functional dependencies, from one set of attributes to another set of attributes in a relation.

6°/ Keys, a special case of functional dependencies.

7°/ A variety of constraints. ALICE was designed to manipulate constraints, and has proved very successful. Examples of constraints on the relation R, with attributes A and B, could be: "for each value of A, there may be at most two values of B" (application: a society where a person may have at most two spouses!); "the values of A must be at least twice larger than the values of B".

Since ALICE has been described extensively elsewhere, we shall not describe it here. The examples will illustrate her capabilities. (The original ALICE required no modifications for the present applications.)
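The semantics of indexed unknowns lend themselves to a three-valued comparison (equal, possibly equal, unequal). The following sketch is our own illustration of such a test, not part of ALICE; all names in it are hypothetical.

    # Illustrative sketch only (not ALICE): three-valued equality over
    # L2-style values: scalars, ranges (sets of possibilities), and
    # indexed unknowns "?i".
    TRUE, FALSE, MAYBE = "true", "false", "maybe"

    class Unknown:
        def __init__(self, index=None):   # "?" has index None; "?5" has index 5
            self.index = index

    def eq(v, w):
        """Three-valued test: are attribute values v and w the same?"""
        if isinstance(v, Unknown) and isinstance(w, Unknown):
            if v.index is not None and v.index == w.index:
                return TRUE               # ?i = ?i
            return MAYBE                  # ?i vs ?j (i != j), or unindexed ?
        if isinstance(v, Unknown) or isinstance(w, Unknown):
            return MAYBE                  # unknown against anything else
        vs = v if isinstance(v, set) else {v}
        ws = w if isinstance(w, set) else {w}
        if not (vs & ws):
            return FALSE                  # disjoint ranges can never match
        if len(vs) == len(ws) == 1:
            return TRUE                   # both fully determined and equal
        return MAYBE                      # overlapping but undetermined

    # "blue" against the range {blue, green} is MAYBE; ?5 = ?5 is TRUE.
    print(eq("blue", {"blue", "green"}), eq(Unknown(5), Unknown(5)))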
4. RELATED WORK

Few works in the area of extensions of relational data bases (to null values or ranges, for example) include running programs. An exception is Lipski (1979). Lipski describes a theory and a program to manipulate data bases where values are within a range. We note that the L2 model includes such ranges and, in addition, null values, functional dependencies and a rich set of constraints.

ALICE has been run on the two problems mentioned by Lipski, which we shall repeat here.

Problem 1: The database is given below; the attribute domains are AGE in (0,+inf), DEPT # in {1,2,3,4,5}, HIREYEAR in 70..80, and SALARY in (0,+inf).

OBJECT | AGE      | DEPT #      | HIREYEAR | SALARY
x1     | 60..70   | {1,2,3,4,5} | 70..75   | {10000}
x2     | 52..56   | {2}         | 72..76   | (0,20000)
x3     | 30       | {3}         | 70..71   | (0,+inf)
x4     | (0,+inf) | {2,3}       | 70..74   | {22000}
x5     | 32       | {4}         | 75       | (0,+inf)

The query is: [Dept # in {2,3}] ∧ [[Salary < 10000 ∧ Hireyear ≤ 72] ∨ [Age > 50 ∧ Salary < 15000]].

ALICE determines (in 1.5 seconds on a 370/168 computer) that x3 and x4 satisfy the query, that x2 might satisfy the query, and that x1 and x5 do not satisfy the query. These results agree with Lipski's.

Problem 2: The database is a relation over the attributes NAME, SALARY, STATUS, # CHILDREN and DEPT #. The query is: [(Dept # = 3) ∧ (Name = Jones) ∨ (Name ≠ Smith) ∧ (Dept # = 2)] ∧ [(Salary < 15000) ∨ (# Children > 2) ∧ (Salary in (10000,20000)) ∧ (Status = Married)].

ALICE determines (in 1 second) that "?" might answer the query, while Brown does not. Again, these results agree with Lipski's.

See the exact formulation of these problems in the ALICE language in the annex.
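Although ALICE itself works by constraint manipulation, the surely/maybe classification above can be illustrated with a much simpler three-valued evaluation: an atomic condition over a set of still-possible values is surely true if every possibility satisfies it, possibly true if some possibility does, and false otherwise; conjunction takes the weakest result and disjunction the strongest. The sketch below is our own simplified illustration (it treats attributes as independent, a restriction ALICE's constraint reasoning does not have), and its names are hypothetical.

    # Simplified "surely/maybe" evaluation over incomplete tuples, in the
    # spirit of Lipski's semantics; not the ALICE program. Each attribute
    # value is the set of scalars still considered possible.
    ORDER = {"no": 0, "maybe": 1, "surely": 2}

    def atom(values, pred):
        hits = [pred(v) for v in values]
        if all(hits):
            return "surely"
        return "maybe" if any(hits) else "no"

    def conj(*rs):                  # AND: the weakest operand wins
        return min(rs, key=ORDER.get)

    def disj(*rs):                  # OR: the strongest operand wins
        return max(rs, key=ORDER.get)

    # Toy tuple: DEPT unknown among {2,3,4}; SALARY known to be 12000.
    t = {"dept": {2, 3, 4}, "salary": {12000}}
    print(conj(atom(t["dept"], lambda d: d in {2, 3}),
               atom(t["salary"], lambda s: s < 15000)))   # -> "maybe"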
5. EXAMPLES OF MANIPULATIONS IN THE L2 MODEL AND SOLUTIONS BY ALICE

We shall briefly describe a problem illustrating the representational capabilities of the L2 model and the data base reduction capabilities of ALICE. (The example is somewhat artificial, so that it remains simple to describe.) When given a data base, ALICE first reduces the database by eliminating redundant tuples, and finding values or ranges for unknowns.

The database is:

Tuple # | A     | B        | C   | D
1       | (0∨4) | ?1       | c3  | ?2
2       | (0∨4) | ?3       | c4  | ?4
3       | (0∨6) | ?5 (≠?1) | c3  | ?6
4       | ?7    | ?8       | c5  | 5
5       | 2     | ?9       | c4  | ?10
6       | ?11   | b1       | c1  | ?12
7       | 1     | b1       | c1  | ?12
8       | ?19   | ?14      | ?15 | ?20
9       | 3     | b1       | ?16 | ?13 in {1,2,3}
10      | ?17   | b1       | c2  | ?18

The constraints are:
(I) Keys: A and (B,C).
(II) F.D.: C → D.
(III) Not more than 2 C's for 1 B.
(IV) At least two different tuples with C = c1.
(V) 0 ≤ A ≤ 10; A integer.
(VI) 0 ≤ D ≤ 5; D integer.
(VII) A ≥ 2D.

Notice null values in the key A (in tuples 4, 6, 8 and 10), and again in the B component of the key (B,C) (in tuples 1 to 5 and 8). Indexed unknowns are used to indicate equality of unknowns, as in 6.D = 7.D, or explicitly to indicate inequality. The constraint on line 3, "?5 ≠ ?1", means that 3.B is equal to ?5 but must be different from ?1.

Solution by ALICE (4 seconds):
- By constraint (I): 3.A = 6. Tuples 1 and 2 give indeed that 1.A and 2.A form together the set {0,4}. Since A is a key, if 3.A = 0 then tuple 3 must be either tuple 1 or tuple 2. But both cases are impossible due to ?5 ≠ ?1 and c3 ≠ c4.
- ?4 = ?10 = (0∨1) by constraints (II) and (VII).
- ?2 = ?6 by (II).
- ?7 = 10 by (V) and (VII); ?6 ≤ 3 by (VII).
- Tuple 6 = tuple 7 by (I); therefore ?11 = 1 and ?12 = 0 by (VII).
- ?16 = c2 by (I) and (III), with 7.A ≠ 9.A together with 7.C = c1 and 10.C = c2.
- ?15 = c1 by (IV), and ?14 ≠ b1 by (II).
- Tuple 9 = tuple 10 by (I); therefore ?17 = 3 and ?18 = 1 by (VI) and (VII).
- b1 ≠ ?1, ?3, ?5, ?8, ?9 by (III).
- ?19 ≠ 0, 1, 2, 3, 4, 6, 10 by (I).
- ?20 = 0 by (II).

The reduced database becomes:

Tuple # | A     | B        | C  | D
1       | (0∨4) | ?1 ≠ b1  | c3 | ?2 = (0,1,2)
2       | (0∨4) | ?3 ≠ b1  | c4 | ?4 = (0,1)
3       | [6]   | ?5 ≠ ?1  | c3 | ?2 = (0,1,2)
4       | [10]  | ?8 ≠ b1  | c5 | [5]
5       | [2]   | ?9 ≠ b1  | c4 | ?4 = (0,1)
7       | [1]   | b1       | c1 | [0]    (tuple 6 removed)
8       | ?19   | ?14 ≠ b1 | c1 | [0]
9       | [3]   | b1       | c2 | [1]    (tuple 10 removed)

The following questions are answered by ALICE in a total of 3 seconds:

Q1: Tuples with D = 5?  A1: Surely: 4; Maybe: 8.
Q2: Tuples with B = b1? A2: Surely: 7, 9; Maybe: none.
Q3: Tuples with B = b2? A3: Surely: none; Maybe: 1, 2, 3, 4, 5, 8.
Q4: Tuples with B = b2 and D = 3? A4: Surely: none; Maybe: 8.
Q5: Tuples with D = 0?  A5: Surely: 7; Maybe: 1, 2, 3, 5, 8.
Q6: Tuples with A = 0?  A6: Surely: none; Maybe: 1, 2. (Actually A6 is more precise: surely, either 1 or 2 but not both. Tuple 8 may not answer query 6 by one of the last conclusions of the previous reduction process.)

The reader can verify, by solving the problem, that some of the deductions, if not very difficult, are not trivial either.

NOTE: For each problem, ALICE receives the formal definitions of the keys, constraints and queries. She does not know the application a priori but interprets these formal data on the data base. An optimised "frozen" version of the system for this kind of application would probably reduce the running times significantly.

6. CONCLUSIONS

The L2 data base model is a natural extension of the relational data base model. It allows null values in any field, including keys; the repetition of tuples, and therefore information from different and possibly coupled sources; and a great variety of constraints. Therefore, we can now model situations that were forbidden in previously proposed extensions.

The problem-solver ALICE reduces a given database, finds contradictions in it, and answers queries. The performance of ALICE in other domains indicates that constraints that are significantly more powerful than those previously allowed, or even those shown in this article, can be successfully accepted in a relational database.

We are pursuing our study of the L2 model, in particular in the areas of non-relevant nulls (which were not given in our examples here), secondary storage handling, and the incorporation of significantly more challenging constraints.

Acknowledgments

Some of this work was pursued while one of the authors (LS) was Visiting Professor at the University of Paris IX Dauphine. The support of Paris IX is gratefully acknowledged.

ANNEX

The statements in ALICE of two of the problems, together with the run of one, follow:

Problem 1: Given the constraints:
range of age: ia = 0; sa = 100.
range of department: id = 1; sd = 5.
range of hireyear: ih = 70; sh = 80.
range of salary: is = 0; ss = 90.

Given the sets:
X = [1,1]
DA = [ia,sa]
DD = [id,sd]
DH = [ih,sh]
DS = [is,ss]

Find the functions:
AGE from X to DA
DEP from X to DD
HYE from X to DH
SAL from X to DS

With the constraint (question):
Dep(X) ≥ 2 and Dep(X) ≤ 3 and (Sal(X) ≥ 10 or Hye(X) ≤ 72) and (Age(X) ≤ 50 or Sal(X) ≥ 15).

All these formulas are written, for the system ALICE, in Polish notation.
The full problem speci- fications follow: SoI CST IA SA ID SD IH SH IS SS ENS X = INT I I DA = INT IA SA DD = INT ID SD DH = INT IH SH 3s = INT IS ss TRO FON AGE >-> X DA DEP >-> X DD HYE >-> X DH SAL >-> X DS Ai7E ET '= DEP 1 2<=DEPl 3 ou <= HYGl 12' SAL 1 10 OU <= AGE 1 50' SAL 1 15 FIN 0,100,1,5,70,80,0,90 The other problems are given in the same manner: objects and relations are formally defined; key or (FD) correspond to partial or total onto functions. Problem 2 gives for instance: SO1 ENS X = INT 1 1 DD = INT 2 2 DN = OBJ DS = INT 18 18 DM = OBJ DC = INT 2 2 TRO FON DEP >-> X DD NAM >-> X DN STA >-> X DM SAL >-> X DS CHI >-> X DC AVE OU ET = DEP 1 3 = NAM 1 JON ET w= NAM 1 SMI = DEP l 2 ou < SAL 1 15 > CHI 1 2 = NAM 1 BRO ou > SAL 1 10 < SAL 1 20 = STA 1 MAR FIN BRO JON SMI MAR FIN 30,30,3,3,70,71,0,100 Comments: the set DN of the possible names is read as data: {Brown,Jones, Smith], the set DM is {married]. The form of the constraints is similar. The run is: call .11(m) PHIO*** internal FINDIC NIFI= 0 translation NNT= 5 NSG= 5 NSD= 7 NK= 108 PE= 13 PF= 1 PCG= 1 PLG= 121 PSG= 1 8 0 12 410 1400 400 120 120 1500 7000 200 1 CTTE 1 OU ET = 0 + MNS 3 DEP 1 = 0 + MNS 2 NAM1 ET # 0 +MNS 3 NAMl= 0 +MNS 2 DEPl .,-impossibility of the constraintz CTTE 2 NON Sal(x)< 15 or children(x)>2 - NSOL= 0; **** AUCUNE SOLUTION AU PROBLEME PHIO*** IERR= O;wno solution NIFI= READY CAzrnber of uni:kzations1°20 time TIME-03:55:39 PM. CPU-00:OO:Ol SERVICE-5060 SESSION-00:05:29 NOVEMBER 20 [ll 121 131 [41 L51 [61 171 [81 [13 1 7. REFERENCES Codd, E.F. "Understanding Relations." FDT Bulletin of ACM-SIGMOD 7 (3-4), 1975, pp. 23-28. Codd, E.F., 'Extending the Data Base Relatioml Model to Capture More Meaning." ACM Trans. Data Base Syst., 4,4 1979, pp 397-434. Date, J.C.: An Introduction to Database Sys- tems (3rd ed.), Addison Wesley, 1981. Gallaire, H. and J.Minker, eds: Logic and Data Bases. Plenum Press (1978). Grant, J. "Null Values in a Relational Data Base." Inf. Proc. Letters, 6,5, 1977, pp. -- 156-157. Imielinski, T. and W. Lipski: On representing incomplete information in a relational database. Proc. VLDB , 1981, pp. 388-397. Lauri>re, J.L.: A language and a program for stating and solving combinatorial problems. Artificial Intelligence 10 (1978) PP. 29-127. Lipski, W.: On semantic issues connected with incomplete information databases. ACM Trans. Database Syst. 4, 1979, pp. 262-296. SiklGssy, L: Efficient query evaluation in relational databases with missing values. Inf. Proc. Letters, l&4/5, 1981, pp. 160-163. Ullman, J.D.: Principles of Database Systems, Computer Science Press, 1980. Vassiliou, Y.: Null values in database manage- ment: A denotational semantics approach, Proc ACM SIGMOD 1979 Int. Conf. on Manag.of data, 1979, pp. 162-169. Vassiliou, Y.: Functional dependencies and iw complete information, Proc. VLDB, 1980, pp. 260-269. Wang, H: Toward mechanical mathematics, IBM, J. Res. Dev., 4,1, 1960. 313
LOCAL COMPUTATION OF SHAPE

Alex Paul Pentland
Artificial Intelligence Center, SRI International, 333 Ravenswood Ave., Menlo Park, CA 94025

ABSTRACT

Local, parallel analysis of shading in an image may be used to infer surface shape without a priori knowledge about the viewed scene. The following has been proven: Given a reasonable assumption about surface smoothness, local analysis of the second derivative of image intensity allows the image-plane component of surface orientation to be determined exactly, and a maximum-likelihood estimate of the remaining depth component of orientation to be made. An algorithm has been developed to compute surface orientation, in parallel, for each image point without knowledge of scene characteristics. This algorithm has been evaluated on both natural and synthesized images, and produces a good estimate of shape.

I. Introduction

A spatially restricted analysis of a single image is logically the first stage of any visual system. This analysis might be relatively simple, e.g., measuring image intensity, or it might be quite complex. This first stage of analysis is especially important because it determines the information that is available to the remainder of the visual system and, therefore, determines the requirements for the remainder of the system. If the first stage of analysis produces a rich description of the world, then the remainder of the visual system will be much simpler than if it had to deal directly with all of the ambiguities of the image. Determining as much about the world as is possible is therefore important at this first stage of processing. If a visual system can possibly calculate shape from local image information in direct, bottom-up fashion, then it should take full advantage of that possibility.

* Support for the final stages of this research was provided by Army Research Office contract DAAG29-79-C-0210. A substantial portion of this research was accomplished at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643 and in part by National Science Foundation Grant MCS77-07569.

When we examine a small neighborhood around a point in an image, usually all we find in the neighborhood is small changes in shading (changes in image intensity). Finding a contour within the neighborhood of a particular image point is unusual. Thus, if we are to investigate what we may learn about surface shape from local examination of an image, we must concern ourselves with shading. This paper therefore examines the question: how may surface shape be recovered from an unfamiliar image by using local analysis of shading?

II. Previous Work

Horn and his colleagues (e.g., [1], [2]) have developed several numerical integration schemes for using image intensity to solve for object shape given a priori knowledge of (1) the distribution of illumination, (2) the bidirectional reflectance function of the object's surface, and (3) the surface orientation along an initial curve on the object's surface. These shape-from-shading techniques may be useful in situations for which there is sufficient information known a priori about the image, such as in a factory setting in which the illumination and the surface reflectance function are known beforehand.
However, because Horn's shape-from-shading theory assumes a priori knowledge of the scene, it fails to answer the question of how to determine shape in an unfamiliar image. Further, none of these techniques satisfy our requirement of using only local image information; Horn's shape-from-shading techniques function by propagating constraint from boundary conditions (such as provided by smooth occluding contours) over the surface whose shape is to be estimated. Thus further analysis is required.

III. The Estimation Of Surface Orientation

The problem of estimating surface shape from local information is, essentially, the problem of determining the unknown surface normal, N, from image measurements of image intensity, I, or its derivatives. Solving for the unknowns in any system of equations requires having more measurements than unknowns. Because the surface normal N has two degrees of freedom, we require at least two measurements at each image point to obtain a solution. Therefore, we cannot determine the surface normal using image intensity alone, because it provides only one measurement per image point. More measurements per point can be obtained from the first, second, or higher derivatives of image intensity; however, while each additional derivative does give more measurements per point, it also brings an even larger number of unknowns into the equations. Therefore, we cannot solve for N simply by using derivatives to obtain more measurements per point; in order to solve for N locally, we must make some simplifying assumptions so that the number of unknowns is not larger than the number of measurements available to us.

A. The Tilt Of The Surface

When we observe a smooth surface, we obtain a strong impression of the tilt of the surface, that is, of which direction the surface slants away from us. Because we have a strong impression of the image-plane component of surface orientation, it might be possible to compute the surface tilt directly. If we could determine the tilt of the surface exactly, then only one degree of freedom (slant, the depth component of surface orientation) would be left undetermined in N.

How might we go about estimating the tilt? Imagine that we could observe the lines of curvature on a surface directly. These lines of curvature would look like the lines drawn in Figure 1. If we were looking straight down on a surface with no twist, the lines of curvature would appear perpendicular, as in Figure 1(a). As we tilted the surface to one side, the lines of curvature would appear progressively more spread, as in Figure 1(b) and (c). Different directions of tilt would cause spreading in different directions.

We cannot observe lines of curvature on the surface directly, of course, but we can observe the interaction of surface curvature with the illuminant in the second derivatives of image intensity. The second derivative of image intensity has three components: I_xx and I_yy, the "curvatures" of image intensity along the x and y axes, and I_xy, the "spread" of those curvatures. Just as with the spread of the lines of curvature, the direction in which this spread term is the greatest is also the direction of the surface tilt.
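Concretely, "the direction in which the second derivative is greatest" can be made explicit with a standard identity (our addition, not notation from the original): the second directional derivative of intensity in image direction θ is

    I_θθ = I_xx cos²θ + 2 I_xy sinθ cosθ + I_yy sin²θ,

and setting dI_θθ/dθ = 0 gives the maximizing direction

    θ* = (1/2) arctan[ 2 I_xy / (I_xx − I_yy) ],

so the tilt direction can be read off directly from the three second-derivative measurements.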
The direction in which the spread is the greatest is also the direction along which d²I is the greatest, and so we have the following proposition:

Proposition (Tilt of the Surface). Given an image of a smooth, homogeneous second-order surface with I_xx ≠ I_yy and I_xy ≠ 0, the tilt of the surface is the image direction in which the second derivative of image intensity, d²I, is greatest.

Figure 1. The manner in which image curvature "spreads" indicates the tilt of the surface. This may be understood by imagining that we could observe the lines of curvature on a surface directly. These lines of curvature would look just like the lines drawn in this figure. If we were looking straight down on a surface with no twist, the lines of curvature would appear perpendicular, as in (a). As we tilted the surface off to one side, the lines of curvature would appear progressively more spread, as in (b) and (c). Different directions of tilt cause spreading in different directions.

Thus one of the two components of surface orientation, the tilt, may be determined from the second derivative of image intensity directly, without knowledge of the illuminant direction. This leaves only the slant of the surface to be determined.

This proposition assumes that the surface we are observing is a second-order surface, that is, a surface that may be described by the Monge patch p = u e1 + v e2 + f(u,v) e3, where f(u,v) is of the form a1 u² + a2 v².* The approximation of an arbitrary surface by such a surface typically causes errors on the order of δ³, where δ is the spacing between observed points on the surface. Thus if ten points are observed across a surface, the maximum error incurred by this approximation is on the order of 1/1000th of the width of the surface. The largest errors occur for "twisting" surfaces, those with ||f_uv|| >> 0. For such surfaces, the error can be on the order of 1/100th of the width of the surface [3]. Thus the assumption that we are observing such a second-order surface does not introduce much error.

* Linear and constant terms in f may be accounted for by appropriately positioning the coordinate axes.

B. The Slant Of The Surface

Pentland [4] has proven that while the tilt of the surface may be exactly determined, the slant of the surface cannot be completely disentangled from the curvature of the surface. We may still make an unbiased estimate of the surface slant, as developed in the following propositions.

Proposition (Normalized Laplacian Of Image Intensity). Given an image of a smooth, homogeneous second-order surface, then

    ∇²I / I = κ_t² z_N⁻² − κ_s²,

where κ_t is the surface curvature along the surface tilt direction, κ_s is the surface curvature in the orthogonal direction, and z_N is the z component of the surface normal (which is equal to the cosine of the surface slant).

What this proposition shows is that ∇²I/I is a function only of the squared curvatures of the surface, κ_t² and κ_s², and the foreshortening. Note that the effects of the illuminant and the surface albedo do not appear in this quantity. The range of relationships between the surface curvatures and the foreshortening for any particular observed value of ∇²I/I is limited. Therefore, if we were given that the magnitude of the surface curvature had a particular a priori distribution, say a uniform distribution, then for any observed value of ∇²I/I, we could make a maximum-likelihood estimate of the foreshortening.
Because the foreshortening is proportional to z_N², we then have an estimate of the slant of the surface, i.e., cos⁻¹(z_N). This leads to the following proposition.

Proposition (Estimation Of Slant). Assuming a uniform distribution of surface curvature, the maximum-likelihood estimate of z_N (the z component of the surface normal, equal to the cosine of the slant of the surface) is

    z_N = ( σ_κ² / |∇²I / I| )^(1/2),

where σ_κ² is the variance of the distribution of surface curvatures.

Pentland [4] and Bruss [5] demonstrate that at least one degree of freedom will remain undetermined by the shading information. The tilt proposition showed that one of the two parameters of surface orientation can be determined exactly, leaving only the slant undetermined. This proposition gives a maximum-likelihood estimate of the slant, which by definition is the minimum-variance unbiased estimate. Therefore, the slant and tilt propositions together constitute the best estimate of surface orientation that it is theoretically possible to make from local shading information. Note that neither the slant estimate nor the tilt estimate requires knowledge of the illuminant direction.
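A rough sketch of such an estimator follows. It is our illustration, not the author's LISP machine implementation; the tilt formula is standard calculus, while the slant line uses the normalized-Laplacian relation as reconstructed above (the original equation is garbled in this copy, so this form should be treated as approximate), and sigma_k is an assumed, user-supplied prior parameter.

    # Hedged sketch of the tilt/slant estimator described in Section III.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tilt_and_slant(image, smoothing=3.0, sigma_k=1.0):
        I = gaussian_filter(np.asarray(image, dtype=float), smoothing)
        Ix, Iy = np.gradient(I)
        Ixx, Ixy = np.gradient(Ix)
        _, Iyy = np.gradient(Iy)
        # Tilt: direction maximizing the second directional derivative of I.
        tilt = 0.5 * np.arctan2(2.0 * Ixy, Ixx - Iyy)
        # Slant: from the normalized Laplacian, per the proposition above.
        norm_lap = np.abs((Ixx + Iyy) / np.maximum(I, 1e-6))
        zN = np.sqrt(sigma_k**2 / np.maximum(norm_lap, 1e-6))
        slant = np.arccos(np.clip(zN, 0.0, 1.0))   # z_N = cos(slant)
        return tilt, slant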
IV. Evaluation

On the basis of this theory, an algorithm for estimating surface shape (the "shape algorithm") was implemented on one of M.I.T.'s LISP machines.* This algorithm has been tested on both synthetic and natural images. Using synthetic images allows the level of performance of the shape algorithm to be checked under ideal conditions, whereas the use of natural images allows the performance of the shape algorithm to be evaluated under the more varied, complex and noisy conditions found in natural scenes. Some examples of the application of this shape algorithm on both synthetic and natural scenes are presented here.

* The algorithm is a straightforward implementation of the calculations in the slant and tilt propositions above.

Figure 2. Evaluation on synthetic images. (A) Artificial images of a sphere and an ellipsoidal shape used to test the surface shape estimation algorithm; (B) side view of relief maps showing the true shape; (C) relief maps showing the estimate of surface shape made by the shape algorithm for the sphere and ellipsoidal shape images. Compare these relief maps to those of 2(B).

A. Synthetic Images

The shape algorithm was tested using the synthetically generated images which appear in Figure 2(A). Figure 2(B) shows the side view of a relief map for this surface, showing the surface shape which was used to generate this image.

The shape algorithm produces estimates of the surface orientation; it was found, however, that displays of the estimated surface orientation do not allow an observer to adequately evaluate the performance of the algorithm. Therefore, for purposes of displaying the performance of the algorithm, the shape algorithm's estimates of surface orientation were integrated to produce a relief map of the surface. These relief maps were found to give observers an adequate impression of the estimated surface shape, and so they are the output shown for the examples presented in this paper even though integration is not part of the shape algorithm per se.

The relief map which results from integrating the shape algorithm's estimate of surface orientation is shown in Figure 2(C). Comparing the relief map which results from integrating the estimate of surface orientation to the original "true" relief map in Figure 2(B) shows that the shape algorithm can attain a very high level of accuracy. For a 200 x 200 pixel image of a sphere, using convolutions with 21 x 21 pixel masks to calculate d²I and ∇²I, the correct shape was recovered to within 0.01%. It is important to remember that these shape estimates were calculated in parallel from purely local image measures, without any a priori knowledge of scene characteristics.

B. Natural Images

The shape algorithm has also been tested on several natural images, and two such examples will be presented here. Figure 3(A) shows the digitized image of a log, together with the relief map generated from the shape algorithm's estimates of surface orientation. Figure 3(B) shows the digitized image of a rock, together with the relief map generated from the shape algorithm's estimates of surface orientation. The relief maps in Figures 3(A) and 3(B) correspond as closely as can be determined to the actual shapes of these two objects. The reader should compare his impression of shape from the digitized images with the relief maps of Figures 3(A) and 3(B).

The shape algorithm has also been successfully employed on other natural images, and on electron microscope images. In each case the estimates of surface shape produced correspond closely to the actual surface shape. These examples demonstrate that the parallel, local computation described in this paper, which does not require any a priori knowledge of the scene, is sufficient to obtain a useful estimate of surface shape in natural images.

REFERENCES

[1] Horn, B.K.P., "Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View," A.I. Technical Report 79, Project MAC, Massachusetts Institute of Technology, 1970.
[2] Ikeuchi, K. and Horn, B.K.P., "Numerical Shape from Shading and Occluding Boundaries," Artificial Intelligence, Special Issue on Computer Vision, 15 (1981).
[3] Isaacson, E. and Keller, H.B., Analysis of Numerical Methods, John Wiley and Sons, New York, 1966.
[4] Pentland, A.P., "The Visual Inference of Shape: Computation from Local Features," Ph.D. Thesis, Psychology Department, Massachusetts Institute of Technology, 1982.
[5] Bruss, A., "Shape from Shading and Bounding Contour," Ph.D. Thesis, Dept. Elec. Engr. & Comp. Sci., Massachusetts Institute of Technology, 1981.

Figure 3. Evaluation on natural images. (A) The digitized log image and the relief map generated by the shape algorithm for that image. (B) The digitized rock image and the relief map generated by the shape algorithm for that image.
EDGE DETECTION IN OPTICAL FLOW FIELDS

William B. Thompson, Kathleen M. Mutch, Valdis Berzins
Computer Science Department, 136 Lind Hall, University of Minnesota, Minneapolis, MN 55455

ABSTRACT

Optical flow is potentially valuable as a source of spatial information. Current techniques provide flow fields which are noisy and sparse, making the recovery of spatial properties difficult at best. This paper describes a technique for locating discontinuities in optical flow, which typically correspond to object boundaries. A simple blurring interpolator is used to smooth out noise and produce denser fields. Discontinuities are found by locating the vector field equivalent of zero crossings in the Laplacian of a scalar field. The technique is illustrated by applying it to realistic vector fields which are both noisy and sparse.

1. INTRODUCTION

The concept of optical flow, vector fields characterizing the two-dimensional changes in time-varying imagery, has received increased attention recently as a source of spatial information. With few exceptions (e.g., [1]), research has focussed on two non-overlapping tasks. Several techniques have been developed for obtaining vector fields from image sequences [2,3]. At the same time, methods have been derived for computing spatial properties from optical flow fields [4,5,6,7,8]. These methods presume that the flow field is known with high accuracy over a dense sampling of points in the field of view. In a realistic environment, current techniques provide vector fields which are far from ideal, being both noisy and sparse. A need exists to develop techniques for estimating spatial information which will function on realistic vector fields. This paper examines one such technique for locating discontinuities in optical flow. Such discontinuities typically correspond to object boundaries.

Errors may arise from many sources in the determination of vector fields [9]. Sensor noise may be magnified and propagated in certain instances. Some techniques make assumptions, such as linear intensity gradients or locally constant vector fields, which frequently may be violated.

This work was supported by the National Science Foundation under Grant MCS-81-05215, and by a Louise Dosdall Fellowship from the University of Minnesota.

In addition, some matching techniques never attempt to match more than a very sparse set of image points [10]. Spatial interpretation techniques requiring accurate input will probably not function well when provided with sparse and error-prone vector fields. As Bruss and Horn noted [11], techniques which rely upon local derivatives of a vector field (e.g., [4]) will only magnify any existing error. If the original vector fields contain significant error, such a technique will not function properly. We need to develop techniques which will be relatively insensitive and adaptive to noisy and sparse vector field problems.

In realistic imagery there is an additional reference frame problem. Both camera and object translations and rotations are possible. The result is ambiguous vector fields, where a given vector pattern could represent many different spatial/motion combinations. Some vector field analysis techniques resolve this problem by limiting the type of allowable motion to, for example, object translation only [12], or camera translation only [13]. Frequently these limitations may be violated in important or interesting image sequences.
It is desirable for analysis techniques to be insensitive to the problems arising from unconstrained motion.

One way to deal with the noise problem is to use global, rather than local, analysis. Motion parameters can be determined by some form of global optimization dependent on values over the whole field of view (e.g., [11]). These motion parameters can then be used in a more local analysis to determine spatial properties. This technique is useful when the observer is moving relative to a static environment. When some significant portion of the scene is moving, or when multiple moving objects are viewed by a stationary observer, the analysis breaks down because no single set of motion parameters is valid over the whole field of view. In such situations, it is desirable for an operator to be localized so that it is within object boundaries and at the same time perform some averaging over an area to reduce noise problems. Since object sizes are unpredictable, it may be useful to apply an operator of variable size, producing an analysis with several degrees of locality.

The technique for edge detection described in this paper attempts to deal with the problems of noisy, sparse vector fields and arbitrary motion. A simple blurring interpolator is used to smooth out noise and produce denser fields. In general, arbitrary motion produces smoothly varying vector fields within an object boundary, and a discontinuity in the vector field at object edges. Our technique relies upon locating these discontinuities, and so involves no restrictions on object motion. Two examples are provided which demonstrate the performance of the technique on sparse, noisy vector fields obtained from real imagery.

2. METHOD

A vector field can be described by the two component quantities of direction and magnitude. When the motion is limited to camera translation, vector direction will vary slowly across the entire image. At edges, discontinuities will occur in magnitude only, and for most analytical purposes the vector field can be reduced to a scalar field [13,14]. When arbitrary motion is allowed, the discontinuity may occur in either magnitude or direction, or both. The vector field in this case cannot be simplified to a scalar field.

Our analysis of discontinuity detection for two-dimensional vector fields is similar to that of Marr and Hildreth in the scalar case [15]. Discontinuity in a discrete image field means that the variability on either side of an edge is much less than the variability across the edge. If the edge is approximately linear and variability along the edge is suitably constrained, then the search for discontinuities can be decomposed into a separate analysis of the scalar fields corresponding to the x and y components of the optical flow. (These constraints are generalizations of the "linear variability" assumption used in [15].) A discontinuity in either component corresponds to a discontinuity in the original field.

Smoothing is performed to reduce the effects of noise and to serve as a bandpass filter on the width of the edges. In the case of sparse vector fields, smoothing serves an additional function of interpolation. A gaussian smoothing kernel is used, since it is optimal with respect to the condition of simultaneous locality in the space and frequency domains.

A discontinuity in either of the smoothed scalar fields will result in a peak in the first derivative of the field, and a zero crossing in the second derivative. The Laplacian is a more convenient operator than the second derivative, however, since it is invariant with respect to coordinate rotations, allowing a one-pass search for edges with arbitrary orientations. For scalar fields that vary linearly parallel to straight edges, the Laplacian is equal to the second derivative taken in the direction of greatest variation. The Laplacian operator applied to a smoothed function has the additional advantage of being closely approximated by the difference of two gaussian functions [15].

Actual discontinuities are found by recombining the componentwise Laplacians into a vector field and then searching for the vector field analog of a scalar zero crossing. At an edge, there will be a zero crossing in at least one component of this difference field, and a value of zero in the other component. Both components may have a zero crossing. In either case, adjacent vectors will reverse direction when an edge lies between them.
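A compact sketch of this method follows, anticipating the thresholds described in Section 3. It is our own illustration rather than the authors' implementation, and the parameter values are placeholders; the 1:1.6 sigma ratio is the one reported in Section 3.

    # Hedged sketch: DoG-smooth each flow component, then flag a flow edge
    # between pixels whose difference-field vectors point in nearly opposite
    # directions with sufficient combined length.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def flow_edges(u, v, sigma=2.0, angle_tol=0.5, min_length=0.1):
        u = np.asarray(u, dtype=float)
        v = np.asarray(v, dtype=float)
        # Difference of Gaussians approximates the Laplacian of a Gaussian [15].
        du = gaussian_filter(u, sigma) - gaussian_filter(u, 1.6 * sigma)
        dv = gaussian_filter(v, sigma) - gaussian_filter(v, 1.6 * sigma)
        edges = np.zeros(u.shape, dtype=bool)
        a = np.stack([du, dv], axis=-1)
        for axis in (0, 1):                    # horizontal and vertical neighbors
            b = np.roll(a, -1, axis=axis)      # note: borders wrap in this sketch
            dot = (a * b).sum(-1)
            na = np.linalg.norm(a, axis=-1)
            nb = np.linalg.norm(b, axis=-1)
            cos = dot / np.maximum(na * nb, 1e-9)
            # Reversal: angle near 180 degrees, combined length above threshold.
            edges |= (cos < -np.cos(angle_tol)) & (na + nb > min_length)
        return edges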
3. IMPLEMENTATION

A sparse vector field was obtained from two adjacent images in a sequence using a point matching technique [10]. The vector field was separated into its component x and y scalar fields. The fields were blurred by two different gaussian kernels with a ratio of standard deviations on the order of 1:1.6. The results were subtracted to form an estimate of the Laplacian of the smoothed fields [15]. The differenced component fields were next recombined, and the resulting vector field was searched for reversals in adjacent vectors. Two thresholds were enforced in the search for vector reversals. Theta, the angle separating adjacent vectors, was allowed to vary in a range around the ideal of 180 degrees. A lower threshold was placed on the combined lengths of adjacent vectors to ensure that the slope of edges was significant.

Figures 1 and 2 show the results when this technique was applied in two different motion situations. No gray scale information was used to assist the vector field technique in locating edges. In figure 1, the vectors on the elephant and tiger are of approximately equal magnitude, but differ in direction. In figure 2, the vectors on the two animals are of different magnitude but lie in approximately the same direction. The edges overlaid on the vector fields indicate that the method works relatively well in both cases.

4. DISCUSSION

The technique described in this paper illustrates one aspect of a more general concept of local reference frames. The larger of the two smoothing kernels acts to form a local average flow. Subtracting a less smoothed field results in information about local deviations away from this average. As we have shown, a directional reversal in this deviation field corresponds to a discontinuity in optical flow. Other surface shape properties such as direction of orientation and sign of curvature also have well defined signatures in the deviation field. As long as only qualitative information is being estimated, a purely local analysis is often sufficient. In such cases, there is no requirement for global motion or camera models.

Another important spatial property, concerning the asymmetry of occlusion edges, is directly available from the deviation field and computed edge locations. While edges are often described as a boundary between two image regions, occlusion edges are only the boundary of one or the other of the corresponding surfaces. Determining which side of a discontinuity in disparity corresponds to the occluding surface provides information about both scene structure and relative depth.
While edges are often described as a boundary between two image regions, occlusion edges are only the boundary of one or the other of the corresponding surfaces. Determining which side of a discontinuity in disparity corresponds 27 to the occluding surface provides information about both scene structure and relative depth. The key to resolving this edge ambiguity is to note that over time, the edge will have the same disparity as the surface to which it corresponds. Furthermore, only the motion com- ponents perpendicular to the edge need be con- sidered, since only they lead to asymmetric changes in the appearance of the boundary. This observation results in a simple computational te5t. Once an edge has been found, a highly smoothed field may be used to get an estimate of the expected image position of the edge in a sub- sequent frame. Disparity estimates in this smoothed field will be affected by regions on either side of the edge dnd hence will in fact be an average. of the two SC tual values. The reai edge will bt: translating over the image with a speed either faster or slower than the estimate. Thus, at the "rxpectcd" Edge location, the devia- tion vectors will not in fact exhibit a direc- tional reversal. However, the projection of the deviation vector at this point onto the normal across the edge points toward the occluded sur- face. Figure 3 presents one possible motion case to which this anaiysis is applied. (a) (b) Figure 3. (a) Disparity field obtained from ini- ~- tial image pair, with motion of regions A and B indicated by vectors. (b) lkvktion field from a subsequent image pair, when A is the occluding surface. lhe expected edge is shown as a dotted line, the true edge as a solid line. The devia- tion vector at the expected edge loc;tion points toward the occluded surface. As with gray scale edges, it is useful to anaiyze optical flow over a range of resolutions. The deviation fields corresponding to different blurring kernels give information about shape pro- perties at different scales. The use of deviation fields is particularly suited to systems which estimate disparities between image frames using coarse to fine matching [16,14,17]. The results of matching at coarser resolutions can be used to establish local reference frames against which results of finer matches can be compared. Thus, looking across leveis of resolution can provide an important source of information about shape. BIBLIOGRAPHY [II H.-H. Nagel, "Representation of moving rigid objects based on visual observations," Computer, v. 14, no. 8, pp. 29-39, 1981. [21 Special Issue on Computer Analysis of Time Varying Images, Computer, v. 14, no. 8, August, 1981. 131 B.K.P. Horn and B. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981. [41 H.C. Longuet-Higgins and K. Prazdny, "The interpretation of a moving retinal image," Proc. R. Sot. Lond., v. B 208, pp. 385-397, ---- 1980. [51 K. Prazdny, "Egomotion and relative depth map from optical flow," Biol. Cybernetics, v. 36, pp. ST-lC2, 1980. [6] S. Ullman, The Interpretation of Visual Motion, Cambridge: MIT Press, 19797 [71 K- Nakayama and J.M. Loomis, "Optical velocity patterns, velocity sensitive neurons, and space perception," Perception, v. 3, pp. 63-80, 1974. [al D.D. Hoffman, "Inferring shape from motion fields," KIT AI Memo No. ---- 592, December, 1980. [91 J.K. Kearney and W.B. Thompson, "Gradient based Estimation of disparity," Proc. IEEE Conf. on Pattern Recognition and Image Processzg, June, 1982. -___ [lo] S.T. 
BIBLIOGRAPHY

[1] H.-H. Nagel, "Representation of moving rigid objects based on visual observations," Computer, v. 14, no. 8, pp. 29-39, 1981.
[2] Special Issue on Computer Analysis of Time Varying Images, Computer, v. 14, no. 8, August 1981.
[3] B.K.P. Horn and B. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[4] H.C. Longuet-Higgins and K. Prazdny, "The interpretation of a moving retinal image," Proc. R. Soc. Lond., v. B 208, pp. 385-397, 1980.
[5] K. Prazdny, "Egomotion and relative depth map from optical flow," Biol. Cybernetics, v. 36, pp. 87-102, 1980.
[6] S. Ullman, The Interpretation of Visual Motion, Cambridge: MIT Press, 1979.
[7] K. Nakayama and J.M. Loomis, "Optical velocity patterns, velocity sensitive neurons, and space perception," Perception, v. 3, pp. 63-80, 1974.
[8] D.D. Hoffman, "Inferring shape from motion fields," MIT AI Memo No. 592, December 1980.
[9] J.K. Kearney and W.B. Thompson, "Gradient based estimation of disparity," Proc. IEEE Conf. on Pattern Recognition and Image Processing, June 1982.
[10] S.T. Barnard and W.B. Thompson, "Disparity analysis of images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-2, pp. 333-340, July 1980.
[11] A.R. Bruss and B.K.P. Horn, "Passive navigation," MIT AI Memo No. 645, November 1981.
[12] R. Jain, "Extraction of motion information from peripheral processes," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-3, no. 5, pp. 489-503, 1981.
[13] W.F. Clocksin, "Perception of surface slant and edge labels from optical flow: a computational approach," Perception, v. 9, pp. 253-269, 1980.
[14] W.E.L. Grimson, From Images to Surfaces, Cambridge: The MIT Press, 1981.
[15] D. Marr and E. Hildreth, "Theory of edge detection," Proc. R. Soc. Lond., v. B 207, pp. 187-215, 1980.
[16] D. Marr and T. Poggio, "Cooperative computation of stereo disparity," Science, vol. 194, pp. 283-287, Oct. 15, 1976.
[17] H.P. Moravec, "Visual mapping by a robot rover," Proc. 6th Int. Joint Conf. on Artificial Intelligence, pp. 598-600, August 1979.

Figure 1. (a) and (b) Original image pair. (c) Vector field obtained with a point matching technique. The edges overlaid on the vector field were obtained with the technique described in the paper.

Figure 2. (a) and (b) Original image pair. (c) Vector field obtained with a point matching technique, with edges overlaid.
MODELING AND USING PHYSICAL CONSTRAINTS IN SCENE ANALYSIS*

M. A. Fischler, L. Quam, S. T. Barnard, R. C. Bolles, M. Lowry, G. Smith, and A. Witkin
SRI International, Menlo Park, California 94025

ABSTRACT

This paper describes the results obtained in a research program ultimately concerned with deriving a physical sketch of a scene from one or more images. Our approach involves modeling physically meaningful information that can be used to constrain the interpretation process, as well as modeling the actual scene content. In particular, we address the problems of modeling the imaging process (camera and illumination), the scene geometry (edge classification and surface reconstruction), and elements of scene content (material composition and skyline delineation).

I INTRODUCTION

Images are inherently ambiguous representations of the scenes they depict: images are 2-D views of 3-D space, they are single slices in time of ongoing physical and semantic processes, and the light waves from which the images are constructed convey limited information about the surfaces from which these waves are reflected. Therefore, interpretation cannot be strictly based on information contained in the image; it must involve, additionally, some combination of a priori models, constraints, and assumptions. In current machine-vision systems this additional information is usually not made explicit as part of the machine's data base, but rather resides in the human operator who chooses the particular techniques and parameter settings to reflect his understanding of the scene context.

This paper describes a portion of the SRI program in machine vision research that is concerned with identifying and modeling physically meaningful information that can be used to automatically constrain the interpretation process. In particular, as an adjunct to any autonomous system with a generalized competence to analyze imaged data of 3-D real-world scenes, we believe that it is necessary to explicitly model and use the following types of knowledge:

(1) Camera model and geometric constraints (location and orientation in space from which the image was acquired, vanishing points, ground plane, geometric horizon, geometric distortion).
(2) Photometric and illumination models (atmospheric and image-processing system intensity-transfer functions, location and spectrum of sources of illumination, shadows, highlights).
(3) Physical surface models (description of the 3-D geometry and physical characteristics of the visible surfaces; e.g., orientation, depth, reflectance, material composition).
(4) Edge classification (physical nature of detected edges; e.g., occlusion edge, shadow edge, surface intersection edge, material boundary edge, surface marking edge).
(5) Delineation of the visible horizon (skyline).
(6) Semantic context (e.g., urban or rural scene; presence of roads, buildings, forests, mountains, clouds, large water bodies, etc.).

In the remainder of this paper, we will describe in greater detail the nature of the above models, our research results concerning how the parameters for some of these models can be automatically derived from image data, and how the models can be used to constrain the interpretation process in such tasks as stereo compilation and image matching. If we categorize constraints according to the scope of their influence, then the work we describe is primarily concerned with global and extended constraints rather than with constraints having only a local influence. To the extent that constraints can be categorized as geometric, photometric, or semantic and scene dependent, it would appear that we have made the most progress in understanding and modeling the geometric constraints.

* The research reported herein was supported by the Defense Advanced Research Projects Agency under Contract Nos. MDA903-79-C-0588 and DAAG29-79-C-0216; these contracts are monitored by the U.S. Army Engineer Topographic Laboratory and by the U.S. Army Research Office.
** Current affiliation: Fairchild Artificial Intelligence Laboratory, Palo Alto, California.
To the extent that constraints can be categorized as geometric, photometric, or semantic and scene dependent, it would appear that we have made the most progress in understanding and modeling the geometric constraints. * The research reported herein was supported by the Defense Advanced Research Projects Agency under Contract Nos. MDA903-79-C-0588 and DAAG29-79-C-0216; these contracts are monitored by the U. S. Army Engineer Topographic Laboratory and by the U. S. Army Research Office. ** Current affiliation: Fairchild Artificial Intelligence Laboratory, Palo Alto, California. From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. II CAMERA MODELS AND GEOMETRIC CONSTRAINTS The camera model describes the relationship between the imaging device and the scene; e-g-, where the camera is in the scene, where it is looking, and more specifically, the precise mapping from points in the scene to points in the image. In attempting to match two views of the same scene taken from different locations in space, the camera model provides essential information needed to contend with the projective differences between the resulting images. In the case of stereo reconstruction, where depth (the distance from the camera to a point in the scene) is determined by finding the corresponding scene point in the two images and using triangulation, the camera models (or more precisely, the relative camera model) limit the search for corresponding points to one dimension in the image via the "epipolar" constraint. The plane passing through a given scene point and the two lens centers intersects the two image planes along straight lines; thus a point in one image must lie along the corresponding (epipolar) line in the second image, and one need only search along this line, rather than the whole image to find a match. When human interaction is permissible, the camera model can be found by having the human identify a number of corresponding points in the two images and using a least-squares technique to solve for the parameters of the model [5]. If finding the corresponding points must be carried out without human intervention, then the differences in appearance of local features from the two viewpoints will cause a significant percentage of false matches to be made; under these conditions, least squares is not a reliable method for model fitting. Our approach to this problem [3] is based on a philosophy directly opposite to that of least-squares -- rather than using the full collection of matches in an attempt to -average out" errors in the model-fitting process, we randomly select the smallest number of points needed to solve for the camera model and then enlarge this set with additional correspondences that are compatible with the derived model. If the size of the enlarged compatibility set is greater than a bound determined by simple statistical arguments, the resulting point set is passed to a least-squares routine for a more precise solution. We have been able to show that as few as three correspondences are sufficient to directly solve for the camera parameters when the three-space relationships of the corresponding points are known; a recent result [13] indicates that 5 to 8 points are necessary to solve for the relative camera model parameters when three space information is not available a priori. 
The perspective imaging process (the formation of images by lenses) introduces global constraints that are independent of the explicit availability of a camera model; particularly important are the detection and use of "vanishing points." A set of parallel lines in 3-D space, such as the vertical edges of buildings in an urban scene, will project onto the image plane as a set of straight lines intersecting at a common point. Thus, for example, if we can locate the vertical vanishing point, we can strongly constrain the search for vertical objects such as telephone or power poles or building edges, and we can also verify conjectures about the 3-D geometric configuration of objects with straight edges by observing which vanishing points these edges pass through. The two horizontal vanishing points corresponding to the rectangular layout of urban areas, the vanishing point associated with a point of illumination [8], and the vanishing point of shadow edges projected onto a plane surface in the scene provide additional constraints with special semantic significance. The detection of clusters of straight parallel lines by finding their vanishing points can also be used to automatically screen large amounts of imagery for the presence of man-made structures.

The technique we have employed to detect potential vanishing points involves local edge detection by finding zero-crossings in the image convolved with both Gaussian and Laplacian operators [9], fitting straight line segments to the closed zero-crossing contours, and then finding clusters of intersection points of these straight lines. In order to avoid the combinatorial problem of computing intersection points for all pairs of lines, or the even more unreasonable approach of plotting the infinite extension of all detected line segments and noting those locations where they cluster, we have implemented the following technique. Consider a unit-radius sphere physically positioned in space somewhere over the image plane (there are certain advantages to locating the center of the sphere at the camera focal point if this is known, in which case it becomes the Gaussian sphere [6,7], but any location is acceptable for the purpose under consideration here). Each line segment in the image plane and the center of the sphere define a plane that intersects the sphere in a great circle -- if two or more straight lines intersect at the same point on the image plane, their great circles will intersect at two common points on the surface of the sphere, and the line passing through the center of the sphere and the two intersection points on the surface of the sphere will also pass through the intersection point in the image plane.
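A minimal sketch of this construction follows. It is our illustration, placing the sphere center at a focal point a distance f behind the image plane (the text notes this choice is convenient but not required); names and the hemisphere convention are assumptions.

    # Map two image line segments to their great-circle plane normals and
    # intersect the great circles to get a candidate vanishing direction on
    # the unit sphere, where even far-off intersections remain bounded.
    import numpy as np

    def plane_normal(p, q, f=1.0):
        """Normal of the plane through the sphere center and the image
        segment from p to q (image points lifted to z = f)."""
        a = np.array([p[0], p[1], f])
        b = np.array([q[0], q[1], f])
        n = np.cross(a, b)
        return n / np.linalg.norm(n)

    def sphere_intersection(seg1, seg2, f=1.0):
        """Unit vector toward the common point of two segments' great circles."""
        d = np.cross(plane_normal(*seg1, f), plane_normal(*seg2, f))
        d /= np.linalg.norm(d)
        return d if d[2] >= 0 else -d   # pick the hemisphere toward the image

    # Two segments from (near-)parallel scene lines; clustering these
    # directions over many segment pairs locates the vanishing points.
    print(sphere_intersection(((0, 0), (1, 0.01)), ((0, 1), (1, 0.98))))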
III EDGE CLASSIFICATION

An intensity discontinuity in an image can correspond to many different physical events in the scene, some very significant for a particular purpose, and some merely confusing artifacts. For example, in matching two images taken under different lighting conditions, we would not want to use shadow edges as features; on the other hand, shadow edges are very important cues in looking for (say) thin raised objects. In stereo matching, occlusion edges are boundaries that area correlation patches should not cross (there will also be a region on the "far" side of an occlusion edge in which no matches can be found); occlusion edges also define a natural distance progression in an image even in the absence of stereo information.

If it is possible to assign labels to detected edges describing their physical nature, then those interpretation processes that use them can be made much more robust. We have implemented an approach to detecting and identifying both shadow and occlusion edges, based on the following general assumptions about images of real scenes:

(1) The major portion of the area in an image (at some reasonable resolution for interpretation) represents continuous surfaces.
(2) Spatially separated parts of a scene are independent, and their image projections are therefore uncorrelated.
(3) Nature does not conspire to fool us; if some systematic effect is observed that we normally would anticipate as caused by an expected phenomenon due to imaging or lighting, then it is likely that our expectations provide the correct explanation; e.g., coherence in the image reflects real coherence in the scene, rather than a coincidence of the structure and alignment of distinct scene constituents.

Consider a curve overlaid on an image as representing the location of a potential occlusion edge in the scene. If we construct a series of curves parallel to the given one, then we would expect that for an occlusion edge, there would be a high correlation between adjacent curves on both sides of the given curve, but not across this curve. That is, on each side, the surface continuity assumption should produce the required correlation, but across the reference curve the assumption of remote parts of the scene being independent should produce a low correlation score. In a case where the reference curve overlays a shadow edge, we would expect a continuous high (normalized) correlation between adjacent curves on both sides and across the reference curve, but the regression coefficients should show a discontinuity as we cross the reference curve. This technique is described in greater detail in [14]. Figure 1 shows experimental results for an occlusion edge.

IV INTENSITY MODELING (and Material Classification)

Given that there is a reasonably consistent transform between surface reflectance and image intensity, the exact nature of this transform is not required to recover rather extensive information about the geometric configuration of the scene. It is even reasonable to assume that shadows and highlights can be detected without more precise knowledge of the intensity mapping from surface to image; but if we wish to recover information about actual surface reflectance or physical composition of the scene, then the problem of intensity modeling must be addressed.

Even relatively simple intensity modeling must address three issues: (1) the relationship between the incident and reflected light from the surface of an object in the scene as a function of the material composition and orientation of the surface; (2) the light that reaches the camera lens from sources other than the surface being viewed (e.g., light reflected from the atmosphere); and (3) the relationship between the light reaching the film surface and the intensity value ultimately recorded in the digital image array.

Our approach to intensity modeling assumes that we have no scene-specific information available to us other than the image data. We use a model of the imaging process that incorporates our knowledge of the behavior of the recording medium, the properties of atmospheric transmission, and the reflective properties of the scene materials. In particular, for aerial imagery recorded on film, we use an atmospheric model that assumes a constant amount of atmospheric reflectance independent of scene radiation, a film model that assumes a logarithmic relation between incoming radiation intensity and film density, and a surface reflectance model that assumes Lambertian behavior (the reflected light is proportional to the incident light; the constant of proportionality is a function of the surface material; and the relative brightness of the surface is independent of the location of the viewer). We identify a few regions of known material in a scene (three materials are sufficient) to calibrate our model to the particular image. The resultant model is used to transform the given image into a new image depicting the actual scene reflectances.

Our intensity model has the form

    d = a * log(r + b) + c

where d is the image intensity, r the scene reflectance, a and c parameters associated with the film process, and b the ratio of atmospheric backscatter to scene illumination. We determine a and c by fitting our model to the identified (d,r) pairs. The fitting is achieved by guessing b (we know b lies in the range 0 to 1), applying the least squares method to the resultant linear equation to calculate a, c, and the residual sum, and adjusting b to minimize this residual sum.

The resultant reflectance image has allowed reasonable material labeling and image segmentation to be achieved on the basis of the reflectance information alone.
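The calibration just described fits naturally into a few lines. The sketch below is our illustration; a plain scan over b stands in for whatever one-dimensional minimizer was actually used, and all names are hypothetical.

    # Hedged sketch: guess b, fit a and c by linear least squares on
    # d = a*log(r + b) + c, and keep the b giving the smallest residual.
    import numpy as np

    def fit_intensity_model(d, r):
        """d: observed image intensities; r: known reflectances (same length)."""
        best = None
        for b in np.linspace(0.0, 1.0, 101):      # b known to lie in [0, 1]
            x = np.log(r + b + 1e-9)              # model is linear in a, c given b
            A = np.column_stack([x, np.ones_like(x)])
            (a, c), res, _, _ = np.linalg.lstsq(A, d, rcond=None)
            resid = res[0] if res.size else 0.0
            if best is None or resid < best[0]:
                best = (resid, a, b, c)
        _, a, b, c = best
        return a, b, c

Given the fitted parameters, each pixel's reflectance estimate would follow by inverting the model: r = exp((d - c)/a) - b.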
IV INTENSITY MODELING (and Material Classification)

Given that there is a reasonably consistent transform between surface reflectance and image intensity, the exact nature of this transform is not required to recover rather extensive information about the geometric configuration of the scene. It is even reasonable to assume that shadows and highlights can be detected without more precise knowledge of the intensity mapping from surface to image; but if we wish to recover information about actual surface reflectance or the physical composition of the scene, then the problem of intensity modeling must be addressed. Even relatively simple intensity modeling must address three issues: (1) the relationship between the incident and reflected light at the surface of an object in the scene as a function of the material composition and orientation of the surface; (2) the light that reaches the camera lens from sources other than the surface being viewed (e.g., light reflected from the atmosphere); and (3) the relationship between the light reaching the film surface and the intensity value ultimately recorded in the digital image array.

Our approach to intensity modeling assumes that we have no scene-specific information available to us other than the image data. We use a model of the imaging process that incorporates our knowledge of the behavior of the recording medium, the properties of atmospheric transmission, and the reflective properties of the scene materials. In particular, for aerial imagery recorded on film, we use an atmospheric model that assumes a constant amount of atmospheric reflectance independent of scene radiation, a film model that assumes a logarithmic relation between incoming radiation intensity and film density, and a surface reflectance model that assumes Lambertian behavior (the reflected light is proportional to the incident light; the constant of proportionality is a function of the surface material; and the relative brightness of the surface is independent of the location of the viewer). We identify a few regions of known material in a scene -- three materials are sufficient -- to calibrate our model to the particular image. The resultant model is used to transform the given image into a new image depicting the actual scene reflectances.

Our intensity model has the form

    d = a*log(r + b) + c

where d is the image intensity, r the scene reflectance, a and c parameters associated with the film process, and b the ratio of atmospheric backscatter to scene illumination. We determine a and c by fitting our model to the identified (d,r) pairs. The fitting is achieved by guessing b -- we know b lies in the range 0 to 1 -- applying the least-squares method to the resultant linear equation to calculate a, c, and the residual sum, and adjusting b to minimize this residual sum. The resultant reflectance image has allowed reasonable material labeling and image segmentation to be achieved on the basis of the reflectance information alone.
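Because the model is linear in a and c once b is fixed, the fit reduces to a one-dimensional search over b. A minimal sketch of this procedure (the function names and the grid over b are our assumptions):

    import numpy as np

    def fit_intensity_model(d, r):
        # Fit d = a*log(r + b) + c to calibration (intensity, reflectance)
        # pairs; at least three known materials are needed, as in the text.
        d = np.asarray(d, dtype=float)
        r = np.asarray(r, dtype=float)
        best = None
        for b in np.linspace(1e-3, 1.0, 1000):      # b lies in (0, 1]
            A = np.column_stack([np.log(r + b), np.ones_like(r)])
            (a, c), res, *_ = np.linalg.lstsq(A, d, rcond=None)
            res = res[0] if res.size else np.sum((A @ np.array([a, c]) - d) ** 2)
            if best is None or res < best[0]:
                best = (res, a, b, c)
        _, a, b, c = best
        return a, b, c

    def to_reflectance(image, a, b, c):
        # Invert the calibrated model to produce the reflectance image.
        return np.exp((image - c) / a) - b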
V SHADOW DETECTION (and Raised Object Cueing)

The ability to detect and properly identify shadows is a major asset in scene analysis. For certain types of features, such as thin raised objects in a vertical aerial image, it is often the case that only the shadow is visible. Knowledge of the sun's location and shadow dimensions frequently allows us to recover geometric information about the 3-D structure of the objects casting the shadows, even in the absence of stereo data [8,10]; but perhaps just as important, distinguishing shadows from other intensity variations eliminates a major source of confusion in the interpretation process.

Given an intensity discontinuity in an image, we can employ the edge labeling technique described earlier to determine whether it is a shadow edge. However, some thin shadow edges are difficult to find, and if there are many edges, we might not want to have to test all of them to locate the shadows. We have developed a number of techniques for locating shadow edges directly, including a simple but effective method for finding the shadows cast by thin raised objects (and thus locating the objects as well); Figures 2 and 3 show its results.

VI VISUAL SKYLINE DELINEATION

Although not always a well-defined problem, delineation of the land-sky boundary provides important constraining information for further analysis of the image. Its very existence in an image tells us something about the location of the camera relative to the scene (i.e., that the scene is being viewed at a high-oblique angle), allows us to estimate visibility (i.e., how far we can see -- both as a function of atmospheric viewing conditions and as a function of the scene content), provides a source of good landmarks for (autonomous) navigation, and defines the boundary beyond which the image no longer depicts portions of the scene having fixed geometric structure.

In our analysis, we generally assume that we have a single right-side-up image in which a (remote) skyline is present. Confusing factors include clouds, haze, snow-covered land structures, close-in raised objects, and bright buildings or rocks that have intensity values identical to those of the sky (a casual inspection of an image will often provide a misleading opinion about the difficulty of skyline delineation for the given case). Our initial approach to this problem was to investigate the use of slightly modified methods for linear delineation [4] and histogram partitioning based on intensity and texture measures; we employ fairly simple models of the relationship between land, sky, and cloud brightness and texture. Our experience to date, on a data base of 15 scenes, leads us to believe that although we can obtain reasonably good results when the confusing factors mentioned above do not dominate the scene, we still make local mistakes which will require more sophisticated reasoning to eliminate; to the extent that the confusing factors become more prominent, the problem can be made arbitrarily hard. Figure 4 shows a typical image and the skyline delineation we have obtained for it. Prior work on this topic, employing considerably more semantic knowledge than our approach, is contained in Sloan [11].

VII SURFACE MODELING

Obtaining a detailed representation of the visible surfaces of the scene, as (say) a set of point arrays depicting surface orientation, depth, reflectance, material composition, etc., is possible from even a single black and white image [12,2]. A large body of work now exists on this topic, and although directly relevant to our efforts, it is not practical to attempt a discussion of this material here. There is, however, one key difference between surface modeling and the other topics we have discussed -- the extent to which the particular physical knowledge modeled constrains the analysis of other parts of the scene. In this paper we have been primarily concerned with physical models that provide global or extended constraints on the analysis; surface modeling via point arrays provides a very localized constraining influence.

VIII CONSTRAINT-BASED STEREO COMPILATION

The computational stereo paradigm encompasses many of the important task domains currently being addressed by the machine-vision research community [1]; it is also the key to an application area of significant commercial and military importance -- automated stereo compilation. Conventional approaches to stereo compilation, based on finding dense matches in a stereo image pair by area correlation, fail to provide acceptable performance in the presence of the following conditions typically encountered in mapping cultural or urban sites: widely separated views (in space or time), wide-angle views, oblique views, occlusions, featureless areas, and repeated or periodic structures. As an integrative focus for our research, and because of its potential to deal with the factors that cause failure in the conventional approach, we are constructing a constraint-based stereo system that encompasses many of the physical modeling techniques discussed above. It is not our intent to discuss this system here, but rather to indicate the framework in which the distinct geometric, photometric, and semantic constraints will interact; Figure 5 shows some examples of this interaction.
IX CONCLUDING COMMENTS

When a person views a scene, he has an appreciation of where he is relative to the scene, which way is up, the general geometric configuration of the surfaces (especially the support and barrier surfaces), and the overall semantic context of the scene. The research effort we have described is intended to provide similar information to constrain the more detailed interpretation requirements of machine vision (e.g., such tasks as stereo compilation and image matching).

REFERENCES

1. S. T. Barnard and M. A. Fischler, "Computational Stereo," ACM Computing Surveys, 1982 (in press).
2. H. G. Barrow and J. M. Tenenbaum, "Recovering intrinsic scene characteristics from images," in Computer Vision Systems (A. Hanson and E. Riseman, eds.), Academic Press, pp. 3-26 (1978).
3. M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A paradigm for model fitting with applications to image analysis and automated cartography," CACM, Vol. 24(6), pp. 381-395 (June 1981).
4. M. A. Fischler, J. M. Tenenbaum, and H. C. Wolf, "Detection of roads and linear structures in low-resolution aerial imagery using a multisource knowledge integration technique," Computer Graphics and Image Processing, Vol. 15(3), pp. 201-223 (March 1981).
5. D. B. Gennery, "Least-squares stereo-camera calibration," Stanford Artificial Intelligence Project Internal Memo, Stanford University (1975).
6. D. A. Huffman, "Realizable configurations of lines in pictures of polyhedra," in Machine Intelligence (Elcock and Michie, eds.), Edinburgh University Press, Edinburgh, Scotland, pp. 493-509 (1977).
7. J. R. Kender, "Shape from Texture," (Ph.D. Thesis, Report No. CMU-CS-81-102) Carnegie-Mellon University, Pittsburgh, Pennsylvania (November 1980).
8. D. G. Lowe and T. O. Binford, "The interpretation of three-dimensional structure from image curves," IJCAI-81, pp. 613-618 (1981).
9. D. Marr and E. C. Hildreth, "Theory of edge detection," MIT AI Lab Memo 518 (1979).
10. S. Shafer and T. Kanade, "Using shadows in finding surface orientations," (Report No. CMU-CS-82-100) Carnegie-Mellon University, Pittsburgh, Pennsylvania (January 1982).
11. K. R. Sloan, Jr., "World model driven recognition of natural scenes," University of Pennsylvania, Philadelphia, Pennsylvania (June 1977).
12. J. M. Tenenbaum, M. A. Fischler, and H. G. Barrow, "Scene Modeling: A structural basis for image description," Computer Graphics and Image Processing, Vol. 12(4), pp. 407-425 (April 1980).
13. R. Y. Tsai and T. S. Huang, "Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces," University of Illinois, Urbana, Illinois (August 1981).
14. A. Witkin, "Recovering intrinsic scene characteristics from images," SRI Project 1019, Interim Technical Report, SRI International, Menlo Park, California (September 1981).

Figure 1: Example of an Occlusion Edge
Figure 2: Detection of Thin Shadow Lines (result of line detection and integration)
Figure 3: Highest Likelihood Shadow Lines Overlaid on Original Image
Figure 4: Results of Skyline Delineation
Figure 5: Constraint-Based Detection and Matching of Vertical Edges -- (a) stereo images: top image left view, bottom image right view; (b) lines found; (c) vertical lines found
Intensity-Based Edge Classification

Andrew P. Witkin
Fairchild Laboratory for Artificial Intelligence Research
Palo Alto, CA 94304

ABSTRACT

A new intensity-based approach to the classification of edges was developed and implemented. Using basic continuity and independence properties of scenes and images, signatures were deduced for each of several edge types, expressed in terms of correlational properties of the image intensities in the neighborhood of the edge. This procedure's ability to discriminate occluding contours from cast shadow boundaries was demonstrated for cases where line junction cues are absent from the image.

1. Introduction

Edges in images arise from several very different kinds of scene event -- occluding contours, discontinuities of surface orientation, material changes, cast shadows, etc. Each kind of edge contributes its own significant constraints to image interpretation [1-8], but these constraints cannot be exploited unless edge types can be distinguished from each other. Edge classification is therefore of considerable importance to image interpretation.

Edge classification in idealized line drawings has been treated in terms of junction constraints [1-3], but the perfect line drawings required to identify junctions have proved difficult to obtain from natural images. Horn [9] has suggested that the intensity profiles across edges (peak vs. step, etc.) may provide distinguishing signatures for some edge types. However, this technique depends on quantitative photometry, and its effectiveness has never been demonstrated for complex imagery.

This paper describes a classification technique that, like line-junction methods, relies on structural rather than quantitative photometric properties of the image and scene, but unlike those methods, utilizes the raw image intensities in the neighborhood of the edge, without requiring elaborate analyses of edge structure. The method follows from basic properties of image edges -- occluding contours and cast shadows in particular will be considered -- and from basic properties of scene structure. Occluding contours are "seams" in the projective fabric, curves across which surface points that may be widely separated in space are juxtaposed in the image by the vagaries of projection. Cast shadows are curves across which the shadowed surface's image undergoes a systematic (ideally, linear) transformation. Given this characterization, two basic complementary properties of scenes suffice to distinguish cast shadows and occluding contours from each other and from arbitrary image curves: (1) the processes that shape, color, and illuminate natural surfaces act continuously almost everywhere, and (2) the properties of widely separated scene constituents are independent.

Given these properties, we conclude that intensities at nearby points on either side of an arbitrary curve in the image are likely to be highly correlated, due to the coherence of surface structure; intensities at nearby image points across an occluding contour are likely to be uncorrelated, because they are actually the projections of distant, hence independent, surface points; and intensities across a cast shadow edge are likely to be highly correlated under a systematic (ideally, linear) transformation, arising from the interaction of the continuous underlying surface structure with the illumination transition.
These observations are applied to image edges by constructing a family of parallel curves around the edge, and performing a sequence of linear regressions of the intensity values along each curve onto those along its neighbor. The following behavior is predicted for each of several edge types:

- A precipitous drop in correlation at the nominal edge location signifies an occluding contour.
- High correlation with an abrupt shift in the regression parameters signifies a shadow.
- Sustained high correlation with the additive and multiplicative regression parameters near zero and unity, respectively, implies that no significant edge is present.
- Low correlation throughout implies lack of coherent surface structure, and no edge type can be assigned.

These ideal structural "signatures" were shown to closely predict regression sequences obtained from images of natural edges.

2. Defining the Problem

Because edge types are defined in terms of the scene events they denote, any method for edge sorting must provide some basis for discriminating those events by their appearance in the image. We therefore begin by characterizing the distinctive properties of occluding contours and cast shadow edges, and defining the computational problem of identifying those edges.

Occluding contours: the projective mapping from image to scene tends to be continuous, because physical surfaces tend to be continuous. Almost everywhere in a typical image, therefore, nearby points in the image correspond to nearby points in the scene. This adjacency is preserved over any change in point of view or scene configuration, short of rending the connected surfaces of which the scene is composed. The distinguishing property of occluding contours (which may be defined as discontinuities in the projective mapping) is their systematic violation of this rule: the apparent juxtaposition of two surfaces across an occluding edge represents no fixed property of either surface, but is subject to the vagaries of viewpoint and scene configuration. For example, if you position your finger to coincide with a particular feature on the wall or outside the window, a small change in the position of head or hand may drastically affect their apparent relation. Because the false appearance of proximity is the hallmark of occluding edges, the problem in identifying those edges may be cast as that of distinguishing in the image the actual proximity of nearby points on connected surfaces from accidental proximity imposed by projection.

Cast shadows: cast shadows in outdoor scenes usually represent transitions from direct to scattered illumination caused by the interposition of an occluding body between the sun and the viewed surface. The problem in identifying cast shadows is to distinguish these transitions in incident illumination from changes in albedo, surface orientation, and so forth. This discrimination presents a problem because the effects of all these parameters are confounded in the image data -- a change in image brightness may reflect a change in albedo or surface orientation, as well as incident illumination.
Because the relation among illumination, reflectivity, orientation, and image irradiance is well known, the presence of shadows in an image could be readily detected if a constant reference pattern could be placed in the scene: when the apparent brightness of a constant pattern varies with location, the change in brightness must, by elimination, be attributed to a change in illumination. Of course such active intervention is generally impractical. The problem may be viewed as that of achieving the effect of viewing a constant pattern across the shadow edge, without actually placing such a pattern in the scene. This could be achieved if some fixed relation were known to hold between the surface strips on each side of the shadow edge.

In short, occluding contours are curves across which points that may be distant in space are placed in apparent juxtaposition by projection, violating the continuity of the projective mapping that holds over most of the image. To identify occluding contours therefore requires that actual proximity be distinguished from apparent proximity imposed by projection. Cast shadow edges are contours across which the pattern of surface reflectance has been systematically transformed by an abrupt change in illumination. To identify cast shadow edges, the effects of illumination must be distinguished from those of albedo and surface orientation, as if a constant reference pattern had been placed across the edge.

3. Continuity and independence

The solution we have devised rests on two simple complementary principles: (1) Continuity: Surfaces, surface markings, and illumination are almost everywhere continuous. Therefore, the projective mapping is almost everywhere continuous, and image intensities at nearby points tend to be highly correlated. (2) Independence: The factors governing the structure of a scene -- the shapes of objects, and their placement with respect to each other, to illuminants, and to the viewer -- are so complex that properties of distinct or widely separated scene constituents may for most purposes be regarded as causally independent. (This independence principle is related to the principle of general position [4,8], which assumes isotropy for viewpoint and object position and orientation.)

One simple measure of continuity or coherence across an image curve (there are many others) is linear correlation between the image intensities a small distance on either side of the curve: a high positive correlation implies that the image strips on either side of the curve are closely related; a low correlation implies no (linear) relation. Given a high correlation, a regression equation can be computed to describe the linear transform relating the intensities across the curve. Several predictions about correlations and regressions across edges follow from the continuity and independence principles:

The continuity principle implies that high correlations should often be observed across arbitrarily selected curves in the image. However, a low correlation could just imply low contrast or fragmented surface structure.

The independence principle implies that high correlations should almost never be observed across occluding contours, because the points meeting along those curves are not the projections of nearby points in space.
The independence principle implies that a cast shadow edge would not have any unusual properties, were the shadowing body removed, because the light source and the shadowed and shadowing objects do not "conspire" to achieve special alignments. Therefore, apart from the effects of the shadow itself, a cast shadow edge should show the same correlational properties as an arbitrarily selected curve. Shadows in outdoor scenes are often transitions between two roughly constant levels of illumination -- scattered and direct -- and the effect of an illuminant change on image intensity may be very roughly idealized as linear. To the extent these idealizations hold, linear correlations across cast shadow edges are likely to be as high as those across arbitrary curves, but the illumination transition will appear as a perturbation of the regression equation (ideally, a multiplicative factor for linear digitization, an additive one for logarithmic digitization).

Given a candidate edge in the image, these observations leave us with the strong implication that a high correlation of intensities across the edge excludes the possibility that the edge is an occluding contour, and the somewhat weaker implications that a low correlation signals an occluding contour, and that a high correlation through a substantial linear transform signals a shadow. The latter implications are weaker because lack of correlation may just signify conditions, such as low contrast, that don't favor correlation. Our conclusions about occluding contours and shadows may be strengthened by examining a larger neighborhood around the edge. By embedding the given edge in a series of parallel curves, a sequence of regressions can be performed one onto the next. A low correlation throughout signals low contrast or lack of texture, and no conclusion can be drawn. However, a sharp notch in an otherwise high correlation, where the regression sequence crosses the edge, argues against global low contrast or lack of texture, providing a stronger indicator that the edge is an occluding contour. Likewise, sustained high correlation with an abrupt perturbation in one or both regression parameters is good evidence for a shadow. Finally, sustained high correlation with no perturbation of the regression equation provides evidence that the edge is not physically significant.

4. Implementation and results

Our implementation assumes that an edge has been located by edge-finding techniques. Hand-traced edges and zero-crossings in a ∇²G convolution [10] were tried as inputs. A parallel family of curves was constructed around the edge as follows: at fixed intervals of arc length along the edge, a line normal to the edge was constructed. The set of points lying some fixed distance from the edge along each normal line then defines a "parallel" curve. A family of curves was constructed by varying that perpendicular distance. This construction amounts to warping a strip of the image, surrounding the edge, into a rectangular region, whose central column corresponds to the original edge. The vertical dimension of this "rectified strip" denotes arc length on the edge, and the horizontal dimension denotes perpendicular distance from the edge. The columns surrounding the central one correspond to parallel curves on either side of the edge. Intensity values for the rectified strip were obtained by bilinear interpolation of the intensities in the original image, to reduce quantization error.
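A minimal sketch of the strip construction, as we read the description above; the names, the half-width, and the use of finite-difference tangents are our assumptions, not Witkin's implementation.

    import numpy as np

    def bilinear(image, x, y):
        # Bilinear interpolation of image intensity at a real-valued (x, y).
        x0 = int(np.clip(np.floor(x), 0, image.shape[1] - 2))
        y0 = int(np.clip(np.floor(y), 0, image.shape[0] - 2))
        fx, fy = x - x0, y - y0
        return ((1 - fx) * (1 - fy) * image[y0, x0]
                + fx * (1 - fy) * image[y0, x0 + 1]
                + (1 - fx) * fy * image[y0 + 1, x0]
                + fx * fy * image[y0 + 1, x0 + 1])

    def rectified_strip(image, edge_pts, half_width=8, step=1.0):
        # Warp the band around the edge into a rectangle: rows follow arc
        # length along the edge, columns follow signed distance along the
        # local normal; the central column is the edge itself.
        edge = np.asarray(edge_pts, dtype=float)      # (N, 2) (x, y) points
        t = np.gradient(edge, axis=0)                 # finite-difference tangents
        t /= np.linalg.norm(t, axis=1, keepdims=True) + 1e-12
        normals = np.stack([-t[:, 1], t[:, 0]], axis=1)
        offsets = np.arange(-half_width, half_width + 1) * step
        strip = np.empty((len(edge), len(offsets)))
        for i, (p, nrm) in enumerate(zip(edge, normals)):
            for j, d in enumerate(offsets):
                strip[i, j] = bilinear(image, p[0] + d * nrm[0], p[1] + d * nrm[1])
        return strip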
Once the rectified strip was constructed, a sequence of linear regressions was performed between columns. To avoid spurious correlation imposed by the imaging and digitizing process, regressions were computed between the ith column and the (i+2)th. The outcome of this computation was a normalized correlation, an additive regression term, and a multiplicative regression term, each a function of column position. The midpoints of these plots represent the regression across the original edge. Points to either side of the plots' midpoints represent regressions between adjacent parallel curves on either side of the original edge. The idealized edge-type "signatures," as developed in the preceding section, are shown and explained in Fig. 1 in terms of these regression sequence plots. Actual image edges, together with rectified strips and regression plots, are shown and described in Figs. 2-5. (The edges in these cases were hand-traced.) Marked correspondence between the actual and idealized plots is evident to inspection. No attempt has yet been made to automate this comparison, although a variety of simple thresholding schemes might well prove empirically adequate to the classification task.

5. Conclusions

Two conclusions may be drawn from these results: first, the correlational properties of intensities in the neighborhood of an edge carry important information about the edge's physical significance, and second, the very basic principles of continuity and independence can provide strong and useful constraints on image interpretation.

Acknowledgements

This work was performed while the author was a member of the Artificial Intelligence Center at SRI International. Support was provided in part by NASA contract #NAS1-16282, NSF grant #MCS-7901830, and DARPA contract #DAAG-29-79-C-0216.

REFERENCES

[1] Huffman, D.A., Impossible objects as nonsense sentences, in: Machine Intelligence, Edinburgh University Press, Edinburgh, 1971.
[2] Clowes, M.B., "On seeing things", Artificial Intelligence, 2(1971), 79-112.
[3] Waltz, D.L., Generating semantic descriptions from drawings of scenes with shadows, AI-TR-271, MIT, Cambridge, MA, 1972.
[4] Marr, D.C., "Analysis of occluding contour", Proc. Roy. Soc. Lond., 197(1977), 441-475.
[5] Stevens, K.A., "The visual interpretation of surface contours", Artificial Intelligence, 17(1981), 47-74.
[6] Witkin, A.P., "Recovering surface shape and orientation from texture", Artificial Intelligence, 17(1981), 17-45.
[7] Barrow, H.G. and Tenenbaum, J.M., "Interpreting line drawings as three dimensional surfaces", Artificial Intelligence, 17(1981), 75-116.
[8] Binford, T.O., "Inferring surfaces from images", Artificial Intelligence, 17(1981), 205-244.
[9] Horn, B.K.P., "Understanding Image Intensities", Artificial Intelligence, 8(1977), 201-231.
[10] Marr, D.C. and Hildreth, E., "A theory of edge detection", MIT AI Memo 518 (1979).

FIGURE 1 -- Idealized edge signatures (correlation, slope, and intercept plots). Each row of plots represents an idealized sequence of linear regressions across an edge. (a) Occluding contour: a sharp notch in correlation across the edge. The plots for the slope and intercept of the regression equation are broken to indicate that the regression equation is meaningless in the low-correlation area. (b) Cast shadow: sustained high correlation across the edge, with disturbance of one or both regression parameters. The nature of this disturbance depends on the sense of the edge (i.e.,
whether the shadow lies on the left or right), and on details of the imaging and digitizing process. In practice, nonlinearities perturb the correlation slightly. (c) No edge present: sustained high correlation without disturbance in the regression parameters implies that the edge is not physically significant. An additional case, not illustrated, is that of low correlation throughout. This indicates low contrast or lack of surface structure, implying that no decision about edge type can be made.

FIGURE 2 -- Example of an occluding edge. In the upper left is the original image, with the selected edge highlighted. The upper right shows the rectified strip, whose midline corresponds to the edge. Below are the plots for correlation, slope, and intercept. The overall high correlation, with a sharp plunge near the edge location, corresponds closely to the idealized form of Fig. 1a. Remember that slope and intercept of the regression line are meaningless where the correlation is low.

FIGURE 3 -- A low-contrast occluding edge. Although the edge contrast is low, the correlation across the edge dips to about .15, compared to about .85 in the surrounding region.

FIGURE 4 -- A cast-shadow edge. The shadow transition appears primarily as a dip in the slope of the regression line (a dip rather than a bump because the left-to-right transition is from light to shadow). The perturbation in the additive regression term, and the dip in correlation to about .75, are due primarily to nonlinearities in the film response.

FIGURE 5 -- Regression where no physical edge is present. As expected, a fairly high correlation, with stable regression parameters, is maintained across the "edge".
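Putting the pieces together, here is a sketch of the regression-sequence computation and a crude thresholding of the idealized signatures. The paper deliberately left automation of this comparison open, so the thresholds below are our guesses, not Witkin's.

    import numpy as np

    def regression_sequence(strip, lag=2):
        # Regress column i+lag onto column i for every column position; the
        # lag of two skips immediately adjacent columns, as in the text, to
        # avoid correlation imposed by imaging and digitization.
        n = strip.shape[1] - lag
        corr, slope, icept = np.empty(n), np.empty(n), np.empty(n)
        for i in range(n):
            x, y = strip[:, i], strip[:, i + lag]
            corr[i] = np.corrcoef(x, y)[0, 1]
            slope[i], icept[i] = np.polyfit(x, y, 1)
        return corr, slope, icept

    def classify(corr, slope, icept, lo=0.3, drift_tol=0.2):
        mid = len(corr) // 2                 # regression across the edge itself
        if np.median(corr) < lo:
            return "no decision (low contrast or no texture)"
        if corr[mid] < lo:
            return "occluding contour"
        drift = (abs(slope[mid] - np.median(slope))
                 + abs(icept[mid] - np.median(icept)))
        return "cast shadow" if drift > drift_tol else "no significant edge"

Note that the intercept drift depends on the intensity scale of the input; any serious use would normalize the strip first.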
SEGMENTATION OF IMAGES INTO REGIONS USING EDGE INFORMATION*

Gerard G. Medioni
Intelligent Systems Group
University of Southern California
Los Angeles, California 90089-0272

* This research was supported by the Defense Advanced Research Projects Agency and was monitored by the Air Force Wright Aeronautical Laboratories under Contract F-33615-80-C-1980, ARPA Order No. 3119.

ABSTRACT

A method for segmenting aerial images using edge information to create regions of similar or smoothly varying intensity is discussed. Region segmentation using edges directly as input cannot be successful because boundaries are seldom perfectly closed. In the present method, we preprocess the edge image to close gaps and create a binary image from which we extract the connected regions. We compare the results obtained with this method and a traditional region-splitting method for 2 different views of an aerial scene.

I INTRODUCTION

There have traditionally been two main approaches to the segmentation of images, edge based and region based.

- Edge based methods proceed by locating discontinuity points in the intensity image and connecting them to obtain primitives. They have the advantage of preserving most of the information present in the intensity picture, but produce very low level primitives (segments) even after further processing [1]. They are very appropriate to describe elongated objects such as roads and rivers.

- Region based methods proceed either by merging regions that have similar intensity and a weak boundary separating them [2,3], or by recursively splitting regions using a threshold defined by histograms [4]. This last technique is very effective on multispectral images. These methods produce higher level primitives (regions with a set of attributes), but most of the time these regions do not correspond to physical entities unless their intensity differs everywhere from the background. If the contrast is too weak, the object will "leak" and will be merged with its background.

We present here a method trying to combine the good points of the 2 methods described above. It substantially differs from the expansion-contraction approach used by Perkins [5] to bridge gaps in the edge image, and does not require an object's interior to contrast with its surround as in Milgram's "superslice" technique [6].

II DESCRIPTION OF THE METHOD

From the grey-level image, we extract the edge points and organize them into linear segments. Using this edge information, we create a new image in which pixels belonging to an edge segment get an intensity depending upon the contrast of the edge at this point and the total length of the segment. We now bridge the gaps in the edges by replacing the intensity at each point by the sum of the intensities in a small square window centered at this point. By thresholding this new image, we obtain a binary image from which we extract the connected regions of intensity 0. These regions are smaller than the expected ones because of the smoothing process, so we expand each one individually to obtain the final result.

A. Processing the grey-level image

We first extract the edges from the image, thin them and link them using the technique developed by Nevatia and Babu [1]. The final primitives we obtain are SEGMENTS, linear pieces approximating a set of edge points. The attributes of a segment are its 2 end points, its length L and its strength S, which is the sum of the contrast of each point.
Since we want to eliminate the gaps in the edges following the boundary of an object and reduce the influence of small random or textured edges, we create an image f(i,j) as follows:

    if (i,j) belongs to a segment SEG then
        if LENGTH[SEG] < MINLENGTH then f(i,j) = LENGTH/STRENGTH
        else f(i,j) = STRENGTH
    else f(i,j) = 0.

This non-linear process permits us to recognize long (>MINLENGTH) segments and to give a high weight to their points.

B. Summing the image

Given the image f(i,j), we use a simple texture/no-texture discrimination process by creating a new image g(i,j) as follows:

    g(i,j) = sum of f(l,m) over l = i-n,...,i+n and m = j-n,...,j+n

That is, g(i,j) is the sum of f(i,j) in a square window of size 2n+1 centered at (i,j). We then threshold this image to get a binary version of it:

    h(i,j) = 0 if g(i,j) < THR
           = 1 otherwise.

C. Extracting regions

From the image h(i,j), we extract all connected regions of intensity 0. Each region represents a shrunk version of a region in which no edges, or only very small and weak edges, are present; the gap between the edge and the border created by the edge is n pixels, n being defined above. In order to reconstruct the physical region, we use a growing procedure on each region as described in [7]: for each pixel, we consider a square window of size 2n+1 centered at that point, and set the pixel to 1 if any pixel in the window is 1. One problem with this technique is that some corners get rounded.

D. Interpretation

Each region now corresponds to a set of edges forming a nearly closed boundary enclosing this region. These regions can be further filtered by looking at their attributes, such as area, ratio of perimeter²/area, and others. They can be the input of a region matching program or can be looked at individually to see if there is an adjoining projected shadow.
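A compact sketch of steps A-D using scipy.ndimage; the helper names are ours, and the perimeter estimate is a rough erosion-based proxy rather than anything specified in the paper.

    import numpy as np
    from scipy import ndimage

    def segment_from_edges(f, n=4, thr=150, max_p2_over_a=35.0):
        # f: the weighted edge image f(i,j) defined above.
        win = 2 * n + 1
        g = ndimage.uniform_filter(f.astype(float), size=win) * win ** 2  # window sum
        h = g >= thr                                    # binary image h(i,j)
        labels, count = ndimage.label(~h)               # connected 0-regions
        square = np.ones((3, 3), dtype=bool)
        regions = []
        for k in range(1, count + 1):
            mask = ndimage.binary_dilation(labels == k, structure=square,
                                           iterations=n)    # grow back by n
            area = mask.sum()
            perim = area - ndimage.binary_erosion(mask).sum()  # rough perimeter
            if area and perim ** 2 / area <= max_p2_over_a:
                regions.append(mask)
        return regions

With the paper's own parameters (n = 4, i.e., 9 by 9 windows; THR = 150; perimeter²/area ≤ 35) this mirrors the experimental setup described in the results section.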
III RESULTS

We tried the above procedure on 2 views of the same scene showing part of the Fort Belvoir Military Reservation in Virginia. The original images have a resolution of 600 by 600 and are shown in Figures 1a and 2a. Figures 1b and 2b show the segments extracted from the intensity array. Note that the boundary of the large building in the lower left of Fig. 1b is not closed, or even nearly closed. Figures 1c and 2c show the image after summation. The following parameters were used: n = 4 (that is, windows are 9 by 9), MINLENGTH = 12 (minimum length of a segment for non-linear processing). From these images we extract connected regions of intensity < 150. We now expand each region individually and filter out all regions with a value of perimeter²/area > 35 to obtain the final result, as shown in Figures 1d and 2d. As we can see, no buildings are missed and their shape is rather well conserved. Figures 1e and 2e show the set of regions obtained by a conventional region splitting [4]. In both images the large building in the lower left is totally lost and some other buildings are merged into a single region.

IV CONCLUSION

The method described above provides better segmentation than region growing or region splitting techniques without semantic information. Computing histograms, especially on monochromatic images, does not always provide a good threshold, even though edges define a clear boundary. We are currently investigating the exact effect of the parameters and a segmentation method coordinating edge information and region splitting.

V REFERENCES

[1] R. Nevatia and K. Ramesh Babu, "Linear Feature Extraction and Description," Computer Graphics and Image Processing, Vol. 13, June 1980, pp. 257-269.
[2] C. F. Brice and C. L. Fennema, "Scene Analysis Using Regions," Artificial Intelligence 1:3, 1970, pp. 205-226.
[3] Y. Yakimovsky and J. A. Feldman, "A Semantics-Based Decision Theory Region Analyser," Proc. IJCAI-73, Stanford, 1973, pp. 580-588.
[4] R. Ohlander, K. Price and R. Reddy, "Picture Segmentation Using a Recursive Splitting Method," Computer Graphics and Image Processing, Vol. 8, 1978, pp. 313-333.
[5] A. Perkins, "Region Segmentation of Images by Expansion and Contraction of Edge Points," Proc. IJCAI-79, Tokyo, Aug. 1979, pp. 699-701.
[6] D. Milgram, "Region Extraction Using Convergent Evidence," Computer Graphics and Image Processing, Vol. 11, No. 1, 1979, pp. 1-12.
[7] A. Rosenfeld and A. C. Kak, "Digital Picture Processing," Academic Press, 1976.

Figure 1a. Original image (resolution 600x600)
Figure 1b. Segments extracted from the original image
Figure 1c. Summed image
Figure 1d. Regions from the summed image
Figure 1e. Regions obtained by splitting method
Figure 2a. Original image (resolution 600x600)
Figure 2b. Segments extracted from the original image
Figure 2d. Regions from the summed image
Figure 2e. Regions obtained by splitting method
A SYSTEMATIC APPROACH TO CONTINUOUS GRAPH LABELING WITH APPLICATION TO COMPUTER VISION*

M. D. Diamond, N. Narasimhamurthi, and S. Ganapathy
Department of Electrical and Computer Engineering
University of Michigan
Ann Arbor, MI 48109

* This work was supported in part by the Robotics Research Laboratory, and in part by the Ultrasonics Imaging Laboratory, both in the Department of Electrical and Computer Engineering, University of Michigan.

ABSTRACT

The discrete and continuous graph labeling problems are discussed. A basis for the continuous graph labeling problem is presented, in which an explicit connection between the discrete and continuous problems is made. The need for this basis is argued by noting conditions which must be satisfied before solutions can be pursued in a formal manner. Several cooperative solution algorithms based on the proposed formulation, and results of the application of these algorithms to the problem of extracting line drawings, are presented.

I THE CONTINUOUS GRAPH LABELING PROBLEM

A graph labeling problem is one in which a unique label λ from a set Λ of possible labels must be assigned to each vertex of a graph G = (V,E). The assignment must be performed given information about the relationship between labels on adjacent vertices and incomplete local information about the correct label at each vertex. In a discrete graph labeling problem [1,2,3], the local information consists of a subset, Λᵢ ⊆ Λ, of the label set associated with vertex vᵢ ∈ V, from which the correct label for each vertex must be chosen. The contextual information consists of binary relations Rᵢⱼ ⊆ Λ×Λ, referred to as constraint relations, assigned to each edge vᵢvⱼ ∈ E. The function of the constraint relations is to make explicit which labels can co-occur on adjacent vertices. The graph, label set, and constraint relations together form a constraint network [2,5]. An (unambiguous) labeling is a mapping which assigns a unique label λ ∈ Λ to each vertex of the graph. A labeling is consistent if none of the constraint relations is violated; that is, if label λ is assigned to vertex vᵢ and label λ' is assigned to vertex vⱼ, then the pair (λ,λ') is in the constraint relation Rᵢⱼ for the edge vᵢvⱼ ∈ E.

Given initial labeling information, several search techniques have been developed which can be used to derive consistent labelings. The original backtracking search described by Waltz [1] was later implemented in parallel by Rosenfeld et al. [6], resulting in the discrete relaxation operator. At the same time a continuous analogue, the continuous graph labeling problem, was proposed, as well as a continuous relaxation algorithm for its solution, and since then several other relaxation algorithms have been proposed [7,8].

In a continuous graph labeling problem, the initial information consists of strength measures or figures of merit, pᵢ(λⱼ), given for each label λⱼ ∈ Λ on each vertex vᵢ ∈ V. The strength measures are assumed generated by feature detectors which are making observations in the presence of noise. They usually take on values in the range [0,1], a 0 indicating no response, and a 1 indicating a strong response. The contextual information, which is represented in terms of constraint relations for the discrete graph labeling problem, is replaced by measures of compatibility, usually taking values in the range [-1,1] or [0,1], which serve to indicate how likely the pairs of labels are to co-occur on adjacent vertices.
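In code, a consistent labeling is simply one whose label pair on every edge is admitted by that edge's constraint relation. A minimal sketch (the data layout is our assumption):

    def is_consistent(labeling, R):
        # labeling: dict mapping vertex -> chosen label
        # R: dict mapping edge (vi, vj) -> set of allowed (label_i, label_j) pairs
        return all((labeling[vi], labeling[vj]) in allowed
                   for (vi, vj), allowed in R.items())

    # Example: two vertices, labels {a, b}; the edge constraint forbids (a, a).
    R = {("v1", "v2"): {("a", "b"), ("b", "a"), ("b", "b")}}
    assert is_consistent({"v1": "a", "v2": "b"}, R)
    assert not is_consistent({"v1": "a", "v2": "a"}, R)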
Several problems have resulted in the extension of the graph labeling problem from the discrete to the continuous case. In the discrete case the presence or absence of a pair in a constraint relation can be determined with certainty, depending on what labelings are to be considered consistent. In the continuous case, however, there is apparently no formal means to assign specific numeric values to the compatibility coefficients, particularly for shades of compatibility between "impossible" and "very likely", although several heuristic techniques have been proposed [7,9,10]. Furthermore, with respect to a constraint network, the concept of consistency is well defined. The objective of the continuous relaxation labeling processes has often been stated to be that of improving consistency; however, the definition of consistency has not been given explicitly. This latter issue is circumvented in several of the optimization approaches which have been proposed [11,12,13], where an objective function, defined in terms of the compatibility coefficients and the initial strength measures, is given. However, because of the dependence of the objective functions on the compatibility coefficients, and because no real understanding of the role which these coefficients play yet exists, it is often difficult to describe the significance of these approaches in terms of what is being achieved in solving the problem.

In an alternate approach to the continuous graph labeling problem [14], an attempt has been made to maintain the characteristics of the original problem while allowing for more systematic approaches toward a solution. It is felt that solutions to the reformulated problem will be more useful because it will be easier to relate the results of the solution algorithm to what is being achieved in the problem domain. In order to develop this approach, we review the characteristics of the solutions to the graph labeling problem which have appeared so far (refer to Fig. 1). The inputs to the process are the initial strength measures {pᵢ⁰(λⱼ)}, i = 1,...,n, j = 1,...,m, which can be represented by an n×m dimensional vector:

    p⁰ = (p₁⁰(λ₁), p₁⁰(λ₂), ..., pₙ⁰(λₘ)) ∈ Rⁿᵐ.

Since the selection of a particular label at a given vertex is related to the label selections made at other (not necessarily adjacent) vertices, information about the label selection at that vertex is contained in the initial labeling values distributed over the extent of the network. The function of the global enhancement process, g, is to accumulate this evidence into the labeling values at the given vertex. The output vector is used by a process of local maxima selection [15], s, to choose a labeling

    λ = (λ₁, λ₂, ..., λₙ),

where λᵢ is the label assigned to vertex vᵢ. Thus g is a function g: Rⁿᵐ → Rⁿᵐ and s is a function s: Rⁿᵐ → Cₙ(Λ), where Cₙ(Λ) is the set of possible labelings. The hope is that the labeling resulting from the process s(g(p⁰)) is an improvement over the labeling resulting from direct local maxima selection s(p⁰).

Fig. 1: Function of the global enhancement process: the input p⁰ passes through the global enhancement process g and then local maxima selection s; λ' represents an improved labeling with respect to λ.

If a numerical solution is to be sought for this problem, then a formal definition must be given to the concept of an improved labeling. In previous work, particularly with respect to computer vision, improvements were rated subjectively, or, in the case of an experiment where the solution was known in advance, by the number of misclassified vertices.
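The decomposition into g followed by s can be made concrete with a toy sketch. The particular enhancement below is a generic relaxation-style update standing in for g; it is not the authors' dynamic-programming or linear-programming algorithm, and the compatibility tensor layout and step size are our assumptions.

    import numpy as np

    def select(p):
        # Local maxima selection s: pick the strongest label at each vertex.
        # p has shape (n_vertices, n_labels).
        return p.argmax(axis=1)

    def enhance(p, compat, iterations=5):
        # A stand-in for the global enhancement process g: repeatedly add to
        # each vertex's label strengths the support its neighbors give via a
        # compatibility tensor compat[i, lam, j, mu], then renormalize.
        q = p.copy()
        for _ in range(iterations):
            support = np.einsum('ilju,ju->il', compat, q)
            q = np.clip(q + 0.5 * support, 0, None)
            q /= q.sum(axis=1, keepdims=True) + 1e-12
        return q

    # labeling = select(enhance(p0, compat)) versus the direct select(p0)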
In our formulation this issue is resolved by assuming that the problem domain specifies an underlying constraint network, or can be modeled to do so. The objective is then to use the initial information to choose a labeling that is (a) consistent with respect to this constraint network, and (b) optimizes a prespecified objective function. In this extension from the discrete to the continuous graph labeling problem, the constraint relations remain intact. We are currently investigating optimal solutions to this formulation of the graph labeling problem based on a maximum-sum decision rule; that is, the rule is to choose a consistent labeling such that the sum of the initial labeling values is maximal.* A solution to this problem could be extended in a straightforward manner to certain well-established decision rules, such as is found, for example, in nearest neighbor classification.

Though the decision rule serves to make explicit what is meant by an improved labeling, it is defined globally. The problem remains to implement it in terms of a cooperative process. The concept of a cooperative process, although not well defined, can be characterized in terms of certain general properties [11,16]. Our research is into algorithms which exhibit certain of these properties, such as locality and simplicity. In an optimal solution, the labeling algorithm must, furthermore, perform the label selection in accordance with the given decision rule. Other important issues, such as speed of convergence, are also being addressed. Two approaches which have some of these properties are demonstrated in the following section. The first is an heuristic approach based on dynamic programming [14] which converges very rapidly and with good results, but does not guarantee a consistent labeling. The second approach is based on linear programming. Details on the latter algorithm will be presented at a later date.

* In terms of decision theory, every consistent labeling constitutes a class and the input vector p is a point in an n×m dimensional feature space.

III EXPERIMENTAL RESULTS

In this section, we demonstrate the application of the two approaches discussed above to the problem of extracting polygon approximations of the outlines of objects in a scene. The experiments described here are based on the reconstruction of simple closed curves (Fig. 2) when noise has been added to the initial labeling values.

The graph used in this experiment is a 16 by 16 raster. Each vertex is represented by a pixel, and is adjacent to its eight immediate neighbors. The associated label set is shown in Fig. 3. A pair of labels on adjacent pixels are consistent if an outgoing line segment is not broken across a common border or corner, and inconsistent otherwise. Examples of consistent pairs of labels are given in Fig. 4, and examples of inconsistent pairs of labels are given in Fig. 5.
The first is the dynamic programming approach with data moving along the eight major directions of the raster (two horizontal, two vertical, and four diagonal). The second is the algorithm based on a linear program- ming approach. The performance of these algorithms are presented in Fig. ‘7 and Fig. 6, which show the result- ing labeling (by choosing the label with greatest strength at each pixel) after 2 and 4 iterations. The dynamic pro- gramming approach reaches a fixed point after 2 itera- tions, however, the result is not a consistent labeling. The linear programming algorithm reconstructs the ori- ginal labeling after six iterations. IV IIMXzWB Our interest here has been to restate the continu- ous graph labeling problem in a manner which allows for a systematic approachs to a solution. The formulation which we have presented amounts to the classification of Fig. 3: Label set for line drawing description. Fig. 5: Examples of inconsistent label pairs. 52 1 I I VI IU I I I III nwl I DI I Fig. 6: Initial labeling plus noise. I i i ii L-44 i ii i Fig. 8a: Output of the linear programming algorithm after two iterations. Fig. 7: Output of the dynamic programming algorithm after two iterations. Note: the algorithm has reached a fixed point. Fig. 8b: Output of the linear programming algorithm after four iterations. 53 consistent labelings according to a prespecified decision rule. As with previous approaches, consistency is defined on a local basis to make sense with respect to a [101 particular problem. For example, if the objective is to extract continuous curves as in the experiment described above, consistency is maintained between pairs of labels when the scene events they represent do 1111 not allow for broken lines. The global nature of the deci- sion rule leads to a more intuitive description of what the techniques accomplishes with respect to the original [12 problem. However, as a consequence, the problem of implementing this rule on a local basis arises. Two approaches to the reformulated problem have been demonstrated above. Our present feeling is that a [ 13 linear programming approach should yield an optimal solution to the continuous graph labeling problem based on a maximum-sum decision rule. However, the restric- tion that the algorithm must be implemented in a local manner has led to some theoretical problems, such as resolving cycling under degeneracy which remain to be [l-+1 solved. Our investigation into these problems is continu- ing. Obviously, the value of this approach and any tech- niques which may be derived from it will depend on whether or not real world applications can be modeled in [151 such a manner so that the absolute consistency between pairs of labels is meaningful. We hope to demonstrate this in at least one problem, deriving line drawings from real world scenes, in forthcoming results. PI PI L31 WI bl PI PI ml PI WI REFERENCES D. L. Waltz, “Generating semantic descriptions from drawings of scenes with shadows,” Technical [171 Report A1271, M.I.T., 1972. U. Montanari, “Networks of constraints: funda- mental properties and application to picture pro- cessing,” Information Sciences, vol. 7, pp. 95- 132, 1974. R. M. Haralick and L. G. Shapiro, “The consistent labeling problem: part I,” IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-1, pp. 173-184, 1979. R. M. Haralick and L. G. Shapiro, The consistent labeling problem: part II,” IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 193-203, 1980. A. K. 
Mackworth, “Consistency in networks of relations,” Artificial Intelligence, vol. 8, pp. 99- 118, 1977. A. Rosenfeld, R. A. Hummel, and S. W. Zucker, “Scene labeling by relaxation operations,” IEEE D-an-s. Syst., Man, Cybern., vol. SMCQ, pp. 420- 433. S. Peleg, “A new probabilistic relaxation scheme ,” IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 362-369, 1980. R. L. Kirby, “A product rule relaxation method,” Comput. CA-aphics Image Processing, vol. 12, pp. 158-189, 1980. S. Peleg and A. Rosenfeld, “Determining compati- bility coefficients for curve enhancement relaxa- tion processes,” IEEE Trans. Syst., Man, Cybem., vol. SMC-8, pp 548-555, 1978. H. Yamamoto, “A method of deriving compatibil- ity coefficients for relaxation operators,” Com- put. Graphics Image Processing, vol. 10, pp. 256- 271, 1978. S. IJllman, “Relaxation and constrained optimiza- tion by local processes,” Comput. Graphics Image Processing, vol. 11, pp. 115-125, 1979. R. A. Hummel, and S. W. Zucker, “On the founda- tions of relaxation labeling processes,” 7+8@7, Dept. of Elect. Eng., McGill University, Montreal, Quebec, Canada. 0. D. P’augeras, and M. Berthod, “Improving con- sistency and reducing ambiguity in stochastic labeling: an optimization approach,” IEEE Trans. Pattern Anal. Machine Intell,, vol. PAMI-3, pp. 412-424, 1981. M. D. Diamond, and S. Ganapathy, “Cooperative solutions to the graph labeling problems,” Proc. PRIP 82 Conference on Patt. Recognition and Image Processings, June, 1982, to appear. S. W. Zucker, Y. G. Leclerc, and J. L. Mohammed, “Continuous relaxation and local maxima selec- tion: conditions for equivalence IEEE Trans. Pat- tern Anal. Machine Intell., vol. PAMI-3, pp. 1:7- 127, 1981. L. S. Davis, and A. Rosenfeld, “Cooperating processes for low-level vision: a survey,” TR-123, Dept. of Computer Science, University of Texas, Austin, 1980. E. C. Freuder, “Synthesizing constraint eupres- sions ,I’ Comm. ACM, vol. 21, pp. 958-966, 1978. 54
DETERMINING SURFACE TYPE FROM SURFACE NORMALS

Paul Amaranth
Academic Computer Services
Oakland University
Rochester, MI 48063

William Jaynes
Intelligent Systems Laboratory
Department of Computer Science
Wayne State University
Detroit, MI 48202

ABSTRACT

By exploiting the relationship which exists between the Gaussian image and gradient space, the two representations can be utilized in conjunction to facilitate the interpretation of a set of surface normals. The projection of the Gaussian image onto a plane can provide clear traces which quickly yield general knowledge of the underlying surface.

I Introduction

There are many methods in low-level machine vision which derive the 3-D surface orientation at each coordinate point in an image. Optical flow [1,8], texture gradient [7], and photometric stereo [2,6] are all techniques used to generate these surface normal vectors. Methods have been suggested for object recognition [5] and segmentation [3] using these normals. In general, however, the process of interpreting this local information to infer the underlying surface type is not well understood. This paper addresses the question: Given a collection of surface normals generated from a single, smooth surface, what can be determined about the surface? Using a variation of the gradient space, we present an approach for distinguishing simple surfaces. Our representation is purposely simple and limited, as our object is to quickly obtain some basic knowledge of the surface.

II Representation

If spatial information is ignored and all surface normals generated from an image are translated to a common origin, the Gaussian image is formed [4,8]. This image can be seen as the locus of points formed by the intersection of the unit normals with a unit sphere called the Gaussian sphere. Any rotation in the scene causes a corresponding rotation of the Gaussian sphere about its origin. Histograming the X and Y components of the unit normals produces a 2-D approximation of the Gaussian image called the Gaussian histogram [3]. At the present time, we do not utilize the cell counts of the histogram, but only the binary image. This binary image is a discrete version of the space we term the Unit Gradient Space (UGS). Analysis using the UGS provides a promising technique for quickly acquiring basic information about the surfaces under consideration.

III Image and Gaussian Space

The image model is entirely viewer centered. The observer looks along the Z axis with the Y axis pointing up and the X axis pointing to the right. The scene is orthogonally projected onto the image at Z=1. With the image coordinate system defined, we now consider the coordinate system of the various gradient spaces. The UGS is closely related to gradient space and the Gaussian sphere. If the equation for a smooth surface is of the form z = f(x,y), then the observer-directed normals to that surface are given by

    [p, q, -1] = [dz/dx, dz/dy, -1]    (eq. 1)

(Fig. 1). Gradient space consists of the points [p,q], each of which represents a particular surface orientation. The Gaussian sphere is a sphere of unit radius. Each point on its surface represents the orientation of the plane tangent to the sphere at that point. The Gaussian sphere can be defined as tangent to the gradient space at the gradient space origin. Kender [7] illustrates the relationships among the various spaces very clearly.

Fig. 1. The observer-directed normals of a conical and a cylindrical surface.
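Equation 2 (below) amounts to normalizing each normal to unit length and keeping its first two components. A minimal sketch, with array conventions that are our own:

    import numpy as np

    def ugs_points(normals):
        # normals: (N, 3) array of observer-directed surface normals in the
        # [p, q, -1] convention (negative Z component toward the viewer).
        n = np.asarray(normals, dtype=float)
        n = n / np.linalg.norm(n, axis=1, keepdims=True)
        n = np.where(n[:, 2:3] > 0, -n, n)   # enforce the viewing hemisphere
        return n[:, :2]                      # the [p1, q1] points of eq. 2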
If the Gaussian sphere and the gradient space are embedded within the scene coordinate system such that the following conditions hold:

1) the gradient space is defined by the plane Z = -1;
2) the p and q axes are aligned with, and of the same units as, the x and y axes;
3) the center of the Gaussian sphere is at the origin;

then a number of interesting properties become readily understandable. For the current discussion, however, the most important result is that the rotational coupling of the various spaces becomes obvious. Refer to Figure 2. Gradient space then corresponds to a central projection of one hemisphere of the Gaussian sphere onto the Z = -1 plane. The UGS can clearly be seen as an orthogonal projection of the same hemisphere onto the Z = -1 plane. Hence, it is a bounded version of gradient space and consists of the points

    [p1, q1] = [p, q] / ||[p, q, -1]||    (eq. 2)

Fig. 2. The embedding of the Gaussian sphere, gradient space, and the unit gradient space into the scene coordinate system.

IV Discussion

We are concerned with surfaces, not entire objects [5], and, at this point, we will narrow our perspective to encompass only single surfaces. As a further restriction, we will consider only conical and cylindrical surfaces with a circular cross section. We do this for two reasons. First, the forms of their Gaussian images are straightforward and easy to visualize: cylinders lie along arcs of great circles and cones form arcs on lesser circles. Secondly, they are singly curved surfaces and therefore representative of a class of surfaces intermediate in complexity, falling between general and planar surfaces. In particular, any singly-curved surface must result in a one-parameter curve in gradient space [10] and, consequently, also in the UGS. Given a curve in the UGS, we want to know if we can determine whether it indicates a cylinder or a cone, and what, if any, parameters of the surface can be determined.

First, let us assume that the axis of the cone or cylinder generating the surface is parallel to the image plane. Since a rotation about the Z axis results in a corresponding rotation in the UGS, we can assume without loss of generality that the axis is parallel to the Y axis. It can be shown that a right circular cylinder results in points on a curve in the UGS given parametrically by

    p1 = sin(t),  q1 = 0    (eq. 3, 4)

For a right circular cone, the curve in the UGS is given by

    p1 = b*sin(t) / sqrt(b^2 + 1),  q1 = 1 / sqrt(b^2 + 1)    (eq. 5, 6)

where b is the ratio of height to the radius of the base. These equations show that in the given orientation, conical and cylindrical surfaces give rise to straight line segments in the UGS. Intuitively, it is easy to see that the projection of an arc of a sphere onto a plane perpendicular to the plane defined by that arc will result in a line segment. The UGS line for a cylinder will pass through the origin, while that of the cone cannot. Further, the ratio of the height to the radius of the base of a cone is given by

    b = sqrt(1 / q1^2 - 1)    (eq. 7)

In real scenes, it is unlikely that objects will be so conveniently oriented. As a consequence, the UGS representation of either a cone or a cylinder in some arbitrary orientation may describe a section of an ellipse. Consequently, the various surfaces are not distinguishable. Another possibility for standard orientation is suggested by the following observation.
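In the axis-parallel orientation of equations 3-7, the discrimination and the recovery of b reduce to a couple of lines of code; the tolerance below is an assumption of ours.

    import numpy as np

    def classify_trace(ugs_pts, tol=0.05):
        # ugs_pts: points of a straight UGS trace for a surface whose axis
        # is parallel to the Y axis of the image.
        q1 = np.asarray(ugs_pts, dtype=float)[:, 1]
        if np.all(np.abs(q1) < tol):
            return "cylinder", None        # line through the origin (eq. 3, 4)
        q = q1.mean()                      # q1 is constant for a cone (eq. 6)
        b = np.sqrt(1.0 / q ** 2 - 1.0)    # height/base-radius ratio (eq. 7)
        return "cone", b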
Another possibility for standard orientation is suggested by the following observation. If a singly-curved surface is oriented such that a plane tangent to the surface is parallel to the image plane, the UGS curve of that surface will pass through the origin. The point on the curve at the origin corresponds to the ruling on the surface determined by the tangent plane. Conversely, if the UGS curve of a singly-curved surface passes through the origin, then there exists a ruling on the surface with a tangent plane which is parallel to the image plane. Thus, a standard orientation can be defined as one in which some ruling of the surface is parallel to the image plane. For a cylinder, this is equivalent to setting the axis of the surface parallel to the image plane, and results in a straight line through the origin of the UGS. The UGS curve for a cone will still lie on an ellipse, however. Thus we have a fast method of distinguishing the surfaces.

In this standard orientation, equation 7 will not necessarily be valid. It is still possible to recover the ratio of height to the radius of the base, however, by exploiting the fact that symmetries in a surface are reflected by symmetries in the Gaussian image. The center of the arc in the Gaussian image will correspond to a ruling down the center of the cylinder or cone. The following procedure is based on this observation.

V Method

The method of standardizing the object position relative to the observer consists of two parts. The first part rotates the Gaussian image so that the center of the arc lies on the Z axis; that is, the center of the arc is at the origin of the UGS. This effectively rotates the surface so that the plane tangent to the approximate middle of the surface is parallel to the image plane. The second part performs a rotation about the Z axis so that the UGS curve is symmetrical with respect to the q1 axis. For symmetrical surfaces, this has the effect of aligning the projection of the y axis of the surface with the q1 axis of the UGS. This yields only an approximate value, however, because of the digitization problem mentioned later.

At this point, conical and cylindrical surfaces are easily distinguished. It is also possible to recover the ratio of height to the radius of the base in the form of the angle at the apex of the cone. In this orientation, the distance between the endpoints of the curve in the UGS is a measure of the angle of the apex of the original conical solid. For a circular cone, this angle is equal to:

2 · acos(dist / 2)   (eq. 8)

VI Results

Data for analysis was generated by a program which, given the parameters of an object, produced the normals for the visible surface. The surface normals were produced by analytically determining the orientation of each of the surface patches as sampled through a 30 by 30 grid in the image space. A variety of cylindrical and conical surfaces at various orientations were manufactured. The resulting sets of normals were processed using the method described above. In all cases, the surfaces were brought either to, or very close to, a standard orientation. Figures 3 and 4 show typical results for a circular cylinder and a circular cone. The differences between conical and cylindrical surfaces are readily apparent.

Fig. 3. Gaussian histograms for a cylinder in (a) an unstandardized position and (b) the standard position.

Fig. 4. Gaussian histograms for a cone in (a) an unstandardized position and (b) the standard position.
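The apex-angle recovery of equation 8 amounts to measuring the endpoint separation of the UGS curve. A sketch follows; taking the two most mutually distant points as the "endpoints" is our own illustrative assumption, as the paper does not specify the procedure:

import numpy as np

def apex_angle_degrees(p1, q1):
    pts = np.column_stack([p1, q1])           # UGS points in standard orientation
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1)).max()          # endpoint separation
    return np.degrees(2.0 * np.arccos(np.clip(dist / 2.0, 0.0, 1.0)))   # eq. 8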
For conical surfaces, equation 8 was used to determine the angle of the apex. Results are illustrated in Table 1. As can be seen, the computed values are within thirteen percent of the actual values calculated analytically. Note the robustness of the method; consistent values are obtained regardless of the starting orientation of the surface.

Table 1. Apex angles calculated for various circular cones at different orientations, using equation 8. All values in degrees.

Actual Angle   Z Rotation   X Rotation   Computed Angle
     53            45          -45             61
     53             0            0             59
     67             0          -20             76
     67             -           20             76
     90             -          -45             96
     90             0           45             96
    110             0          -20            112
    110             -            -            112
    127             -            -            134
    127             -            -            134

VII Concluding Remarks

The UGS image is formed by quantizing and histogramming the normals. As Dane and Bajcsy [3] have pointed out, a major problem lies in quantization effects. Resolution will determine whether a curve in the UGS is seen as a point, a smooth curve, or as disjoint points. Dealing with one surface at a time, however, allows the possibility of adjusting resolution.

Our method makes use of the symmetrical nature of the surfaces apparent in the standard orientation. Depending on the initial orientation of the surface, however, it may appear somewhat asymmetrical, resulting in a final orientation that is not quite equivalent to the standard defined above. A variety of approaches were used to counter this difficulty, but it has not been completely eliminated. This does not appear to be a major problem, as the difference between the ideal orientation and that obtained is only a few percent.

When dealing with singly-curved surfaces, it may not be necessary for the input to contain only one surface. Different types of surfaces, or similar surfaces at different orientations, result in distinct traces. This fact could be used to aid in segmentation [3].

Finally, there is much more information which has yet to be exploited. Flatter surfaces yield a more condensed image in the UGS. The parameters of the elliptical curve in the UGS reflect on the curvature of the generating surface. The counts in the cells of the Gaussian histogram concern the relative size and/or rate of curvature of a surface. These are all matters for further investigation.

REFERENCES

[1] Clocksin, W.F. "Determining the Orientation of Surfaces from Optical Flow." Proc. AISB/GI, Hamburg (1978), pp. 93-102.
[2] Coleman, E.N. and Jain, R. "Obtaining 3-Dimensional Shape of Textured and Specular Surfaces Using Four Source Photometry." Computer Graphics and Image Processing 18 (1982), pp. 309-328.
[3] Dane, C. and Bajcsy, R. "Three-Dimensional Segmentation Using the Gaussian Image and Spatial Information." Proc. PRIP-81 (1981).
[4] Horn, B.K.P. "Sequins and Quills: representations for surface topography." A.I. Lab., M.I.T., A.I. Memo 536 (1979).
[5] Ikeuchi, K. "Recognition of 3-D Objects Using the Extended Gaussian Image." Proc. IJCAI-81 (1981), pp. 595-600.
[6] Ikeuchi, K. "Determining Surface Orientations of Specular Surfaces by Using the Photometric Stereo Method." IEEE Trans. PAMI-3 (1981), pp. 661-669.
[7] Kender, J. "Shape from Texture." Dept. of Computer Science, Carnegie-Mellon University, CMU-CS-81-102 (1981).
[8] Prazdny, K. "Egomotion and Relative Depth Map from Optical Flow." Biological Cybernetics 36 (1981), pp. 87-102.
[9] Smith, D.A. "Using Enhanced Spherical Images." A.I. Lab., M.I.T., A.I. Memo 530 (1979).
[10] Woodham, R.J. "Reflectance Map Techniques for Analyzing Surface Defects in Metal Castings." A.I. Lab., M.I.T., AI-TR-457 (1978).
SPATIAL AND TEMPORAL REASONING IN GEOLOGIC MAP INTERPRETATION

Reid G. Simmons
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

ABSTRACT

In this paper, we describe a way of extending and combining several AI techniques to attack a class of problems exemplified by a problem known as geologic map interpretation. We use both a detailed and an abstract model of elementary geology, combined with both local and global reasoning techniques, to achieve the system's expertise. In particular, a new technique called imagining allows us to find global inconsistencies in our hypotheses by causally simulating a sequence of "instructions". Imagining makes use of both our detailed and abstract models of the world.

I INTRODUCTION

A recent trend in expert systems research has been toward systems which reason from descriptions of causal processes in the domain, rather than from the surface effects. This is the "causal model" versus "empirical association" distinction presented in [2]. Typically, this involves creating a detailed model of the physical processes which underly the domain. This model must support inferences about how the processes affect the world, and how they interact with one another. One problem with such models is that they are often too complex to allow the relevant inferences to be made in a reasonable amount of time. Hence, representations and methods of reasoning which abstract or compress detail are very useful for much of the problem solving process. The full model may be used when these less detailed methods fail, but that should occur infrequently in the problem solving process as a whole.

This research explores the issues of reasoning from both a detailed and an abstract model of the domain, using as a testbed a problem from geology known as map interpretation. In this paper, we outline the types of geologic knowledge needed to represent and reason about the geologic environment. We illustrate the use of both a detailed model of geologic processes and a more abstract model employing diagrams. We also discuss the types of reasoning needed to solve the problem, with particular emphasis on a new technique called imagining. Imagining allows one to "visualize" the effects of a sequence of processes, and detects any global inconsistencies in that sequence.

II MAP INTERPRETATION

In geologic map interpretation, one is given a diagram representing a vertical cross-section of a region, plus a legend identifying the various rock formations. The problem is to reconstruct a plausible sequence of geologic events which could have formed that region. A simple map example is shown in Figure 1.

Fig. 1. Simple Geologic Map Interpretation Problem (legend: mafic igneous, shale, schist)

A geologist would approach this problem by noting that, since the mafic igneous crosses the schist,* it intruded through (i.e., forced its way through) the schist and hence is younger. The geologist would use the same reasoning to infer that the mafic igneous also intruded through the shale. Thus the shale and the schist were both in place before the mafic igneous intruded through them. To determine in what order the schist and the shale appeared, the geologist would infer that, since sedimentary deposits are deposited from above onto the surface of the Earth, the shale was deposited on top of the schist. The schist was created from existing rock by the process of metamorphism.

This work was supported in part by a Graduate Fellowship from the National Science Foundation.
* Our geologic knowledge includes the facts that schist is a metamorphic rock, mafic igneous is igneous, and shale is a sedimentary deposit.
However, metamorphism occurs in rocks buried deep in the Earth and deposition occurs on the surface, so somehow the schist must have gotten from the depths to the surface to be deposited upon. A combination of the processes of uplift and erosion would suffice to bring the schist to the surface. Thus, the final sequence of events is:

1. Metamorphism of schist
2. Uplift and erosion of schist
3. Deposition of shale on schist
4. Intrusion of mafic igneous through schist and shale

Solving the map interpretation problem is typically taught in introductory geology courses. This indicates that the problem is solvable with an elementary knowledge of geology, together with some common sense physical knowledge. The fact that only elementary geologic knowledge is needed means that many of the details of a complete model of geology need not be represented. On the other hand, the domain is fairly complex because one must reason about continuous processes and their interacting effects on an environment. We feel that since the domain is both bounded and complex, it is a good problem choice for studying the use of imagining and the use of detailed and abstract models.

III TYPES OF KNOWLEDGE USED

We have used three basic types of knowledge in solving the above example: knowledge about the nature of geologic processes, temporal knowledge to create sequences of events, and knowledge of diagrams as abstractions of the geologic model.

A. Geologic Process Knowledge

Geologic process knowledge forms the basis of the geologic model, which is used throughout the problem solving process. We will need to represent basic geologic knowledge such as "schist is a metamorphic rock", and knowledge about processes that would allow us to infer, for example, what effect erosion would have on the environment. We also need to represent how processes interact. For instance, in step 2 of the example, uplift and erosion happen simultaneously. We intend to use the Qualitative Process Theory [4] as the basis of our approach because it enables us to represent qualitative knowledge about processes and knowledge about how those processes interact.

We need to reason about the geologic processes in both a synthetic and an analytic mode. In the synthetic mode we reason from the causes, simulating processes to determine their effects on the environment. In the analytic mode we reason from the effects, analyzing the difference between two situations to hypothesize the existence of a geologic process which could account for those differences.

B. Temporal Knowledge

There are a number of different problems in creating sequences of geologic events. For example, we need a mechanism for composing geologic processes. We also need to formalize what causes an attribute of an object to change and to reason about enduring attributes, those whose values do not change over a particular temporal interval. We have not yet done substantial research on these topics, but we believe that the Qualitative Process Theory, with its idea of a history, is a step in the right direction.

C. Diagrammatic Knowledge

In this system, diagrams are used as an abstraction of the geologic model.
Diagrams facilitate spatial inferences for two reasons. First, they abstract away irrelevant details, such as the internal structure of formations. Second, unlike other representations of the same knowledge, diagrams are spatially organized; that is, the position of an entity in space is directly related to its position in the diagram. The adjacency, position, and orientation of geologic features such as formations can be readily inferred by "viewing" the diagram.

A key technique in this research is to represent the effects of geologic processes as diagrammatic transformations. For instance, the process of erosion can be visualized in a diagram (Figure 2) by drawing a horizontal line at the level of the erosion, erasing all faces which are above the line, and erasing the line where it cuts through air. Because geologic processes primarily have spatial effects, the diagrammatic transformations can be more easily carried out than the corresponding inferences in the geologic model. That is, to ascertain the effects of geologic processes, it proves to be more efficient to transform the diagram and "view" it than it is to reason about the processes directly.

Fig. 2. Diagrammatic Effects of Erosion

This diagram transformation facility has been implemented. The underlying representation for the diagram is based on the winged-edge structure of [1]. This representation facilitates geometric and topological inferences. The interface between the diagram sub-system and the rest of the system is based on a small vocabulary of terms such as faces, edges, sidedness relations between them (e.g., above, left-of), and adjacency relations (e.g., which lines make up a face, which face is on the other side of an edge and a face). There are also transformation commands, such as drawing a line, rotating the diagram, or merging two faces.
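The erosion transformation can be suggested with a deliberately simplified diagram model; in the sketch below each face is reduced to a (name, top, bottom) depth interval, whereas the actual implementation uses Baumgart's winged-edge structure [1], which this Python toy does not attempt to reproduce:

def erode(faces, level):
    # faces: list of (name, top, bottom) with top > bottom.
    # Draw a horizontal line at 'level', erase every face wholly above it,
    # and truncate any face the line cuts through.
    result = []
    for name, top, bottom in faces:
        if bottom >= level:        # entirely above the erosion line: erased
            continue
        result.append((name, min(top, level), bottom))
    return result

# e.g. erode([("shale", 3, 1), ("schist", 1, -2)], 2)
#      yields [("shale", 2, 1), ("schist", 1, -2)]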
IV TYPES OF REASONING

One way to view this problem is as a standard search. We have an initial state of the world (e.g., the "void"), and operators to be applied (i.e., all possible geologic processes). It becomes clear in this view that unconstrained forward search is not a plausible approach.

A. Scenario Matching

In order to make the problem tractable, we reason backwards from the effects of processes to their causes using a process we call scenario matching. A scenario is a pair consisting of a diagrammatic pattern and a set of sequences called interpretations which could have caused that pattern. For example, in solving the example in Figure 1 we used the following scenario twice:

pattern: <rock> igneous <rock>
interpretation: igneous intruded through the <rock>

The patterns represent local effects of processes and involve the boundaries between two or three formations. An interpretation is a sequence of events which is a possible causal explanation for the pattern's occurrence. Each pattern may have several plausible interpretations. Note that scenarios effect a translation from the presentation language (diagrams) to the causal language (geologic processes).

By matching scenario patterns throughout the diagram and combining the local interpretations, we generate sequences which purport to explain how the region was formed. However, these sequences might not be valid explanations for two reasons. First, local consistency does not imply global consistency. Second, the evidence for the occurrence of some physical processes might no longer exist in the geologic record (i.e., the diagram). For instance, there is no evidence in Figure 1 for the occurrence of the processes of uplift and erosion of the schist, because the erosion has removed whatever once covered the schist. To detect both types of inconsistencies, a method of global reasoning is needed.

B. Imagining

We are developing a new technique called imagining to handle this situation. An outgrowth of the notion of envisioning [3], imagining takes as input an initial environment, a goal state which is the final environment (a diagram), and a sequence of geologic processes. The imaginer simulates each of the processes in turn, producing a final environment which is then compared with the goal environment to see if they qualitatively match. Although we noted that unconstrained forward search is not practical in this domain, imagining, by using a sequence of plausible operators, constrains the search sufficiently to avoid the combinatorics problem.

The imaginer must determine if each process of the sequence, viewed as an operator, can be applied to the current environment. If the imaginer cannot continue, it returns an explanation of the problem encountered. This explanation consists of the process which the imaginer could not simulate, and the difference between the state which would be needed in order to simulate that process and the state actually produced by simulating up to that process. This imagining process should suffice to detect both types of inconsistencies mentioned above.

The imaginer works by transforming diagrams in accordance with the diagrammatic interpretation of the geologic processes outlined in the previous section. Detecting if a geologic process can be simulated involves inferences from both the geologic and diagrammatic models. For instance, if the geologic process was "deposit A on B", the geologic model would be checked to see if A is a sedimentary rock, and the diagram would be checked to see if B is on the surface.+

The imaginer must be able to infer the parameters of the geologic processes. The description of the processes in the input sequence is qualitative, but quantitative parameters are needed to do the diagrammatic transformations. In addition, these parameters must be approximately correct if the imaginer is to produce a final diagram which is similar to the goal diagram. For example, in order to simulate "deposit A on B" the imaginer would have to know the width of A, at least within some definite range. The imaginer uses measurements taken from the diagram, plus knowledge of geologic processes, to determine these parameters. For instance, from Figure 1, we can measure the width of the schist deposit, and since we also know that part of the schist was later eroded away (in step 2), the original width of the formation was greater (by some unknown amount) than the measured width in the diagram.

+ The use of diagrams is not intrinsic to the concept of imagining, which merely implies causal simulation of a sequence of "instructions". The imaginer could work on the geology model alone.

C. Gap Filling

If the imaginer detects a "gap" between the state needed for some process to occur and the actual state of the environment,* we need to hypothesize some sequence of events to fill the gap. The imaginer indicates why it could not continue in terms of the difference between two states, and from that one can reason about which process or sequence of processes would have the effect of minimizing or eliminating that difference. This is means-end analysis [5] used in a restricted context.
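The imagining process of section B can be rendered schematically as follows; the operator protocol (an applicability test plus a simulation step) and all names are illustrative stand-ins for the system's geologic and diagrammatic models, not its actual code:

def imagine(initial_env, goal_env, processes, matches):
    env = initial_env
    for proc in processes:
        ok, needed = proc.applicable(env)   # checks both models, e.g. "is A
        if not ok:                          # sedimentary?", "is B on the surface?"
            # the explanation: the failed process plus the gap between the
            # state it needed and the state actually produced so far
            return {"failed": proc, "needed": needed, "actual": env}
        env = proc.simulate(env)            # a diagrammatic transformation
    return None if matches(env, goal_env) else {"mismatch": env}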
V CONCLUSION

We have described a way of extending and combining several AI techniques to attack a class of problems exemplified by the map interpretation problem. We have used both a detailed and an abstract model of elementary geology, combined with both local and global reasoning techniques. In particular, imagining allows us to find global inconsistencies in our hypotheses by causally simulating a sequence of "instructions". Imagining makes use of both our detailed and abstract models of the world.

ACKNOWLEDGMENTS

I would like to thank Randy Davis for his guidance and supervision, and Ken Forbus and Chuck Rich for their valuable suggestions and comments.

REFERENCES

[1] Baumgart, Bruce - "Geometric Modelling for Computer Vision," Stanford AIM 249, October 1974.
[2] Davis, Randall - "Expert Systems: Where are we and where do we go from here," AI Magazine, Summer 1982.
[3] de Kleer, Johan - "Qualitative and Quantitative Knowledge in Classical Mechanics," MIT AI-TR-352, 1975.
[4] Forbus, Kenneth D. - "Qualitative Reasoning about Physical Processes," in Proc. IJCAI 7, Vancouver, Canada, August 1981.
[5] Newell, Allen and Simon, H.A. - "GPS, A Program that Simulates Human Thought," in Computers and Thought, eds. Feigenbaum and Feldman, 1963.

* as would have occurred in Figure 1 if we had not hypothesized the presence of the uplift and erosion.
PANDORA - A Program for Doing Commonsense Planning in Complex Situations*

Joseph Faletti
Computer Science Division
Department of EECS
University of California, Berkeley
Berkeley, California 94720

Abstract

A planning program named PANDORA (Plan ANalyzer with Dynamic Organization, Revision, and Application) has been developed which creates plans in the commonsense domains of everyday situations and of a Unix** Consultant using hierarchical planning and meta-planning. PANDORA detects its own goals in an event-driven fashion, dynamically interleaving the creation, execution and revision of its plans.

1. Introduction.

Most early work in AI problem solving has used simple control structures to work in restricted or specialized domains (e.g., Fikes and Nilsson (1971), Newell and Simon (1972), Sussman (1975)), although some recent work has moved to common-sense domains (Rieger (1975), Hayes-Roth and Hayes-Roth (1979), Carbonell (1978), Cohen and Perrault (1979)) and more powerful control structures (Sacerdoti (1977), Genesereth (1978), Stefik (1980)). We have developed a theory of planning described in Wilensky (1981) which suggests that the design of a planning program should include:

1. Shared knowledge with a planning story understander.
2. Use of common-sense domains.
3. Use of hierarchical planning and meta-planning.
4. Detection of its own goals in an event-driven fashion.
5. Dynamic interleaving of the creation, execution and revision of plans.

2. PANDORA - Commonsense Planning in Complex Situations.

PANDORA (Plan ANalyzer with Dynamic Organization, Revision, and Application) is a program incorporating the features described. It is implemented in PEARL, an AI programming language developed at Berkeley (Deering, Faletti, and Wilensky, (1981) and (1982)). PANDORA uses the same planning knowledge as the newest version of PAMELA, its story understanding counterpart, implemented by Peter Norvig. PANDORA and PAMELA also share an inference and frame-based memory package of knowledge and routines which perform all low-level processing of input and instantiating of frames.

* This research was sponsored in part by the Office of Naval Research under contract N00014-80-C-0732 and the National Science Foundation under grant MCS7906543.

3. Examples from Two Domains.

PANDORA has been applied to two commonsense domains. PANDORA's original domain was that of everyday human situations. For example, one task PANDORA plans for involves the task of retrieving the morning newspaper when it is raining. Given the input:

(TimeOfDay (Time Morning))
; It is raining.
(Weather (Object (Outside)) (Condition Raining))

PANDORA detects its normal morning goal of knowing what is going on in the world, modifies it to adjust for the rain, and produces the following plan:

; Put on a raincoat.
(PutOn (Actor (Ego)) (Object (Raincoat)))
; Go outside.
(PTrans (Actor (Ego)) (Object (Ego)) (To (Outside)))
; Pick up the newspaper.
(Grasp (Actor (Ego)) (Object (Newspaper)))
; Go back inside.
(PTrans (Actor (Ego)) (Object (Ego)) (To (Inside)))
; Read the newspaper.
(Read (Actor (Ego)) (Object (Newspaper)))

That is, PANDORA figures out that she has to put on a raincoat before going outside, something that she does not normally do to retrieve the newspaper. In order to generate this plan, PANDORA must:

1. Notice that going outside this morning would get her wet and that this is a state she wants to prevent. This is accomplished by the Noticer and the Goal Detector.
2. Detect the resulting goal conflict between her plan of going outside to get the newspaper and the Stay Dry preservation goal. The Goal Detector does this.
3. Find a meta-plan which will find a way to alleviate this problem. This is done by the Plan Selector.
4. Execute this meta-plan, which must find the Wear-Raincoat plan and modify the original plan. This is done by the Executor.

PANDORA is also being applied to the domain of using the Unix** operating system. The Unix domain was chosen so that PANDORA could be used as a problem-solver component of the natural language Unix Consultant UC (Wilensky (1982)) now being developed at Berkeley. UC will call on PANDORA whenever the question presented involves more complicated planning.

** Unix is a trademark of Bell Laboratories.

For example, if we tell PANDORA that PAMELA has a new electronic mail address by asserting

(MailAddress (Person PAMELA)
             (Where (Address (String ("kim:pamela")))))

into the data base, and later enter

(Type (Actor Vi) (To Ego)
      (Message (String (" out of disk space"))))

when PANDORA is waiting for feedback from the write command in the editor, PANDORA generates and executes plans which result in the following steps being executed:

; Edit the Addresses file.
(StartUp (Program Editor))
(MakeChanges (File Addresses) (Changes (AddAddress 0)))
; Attempt to write the file but fail.
(Type (Text (String (":w" (FileName (File Addresses)) (Return)))))
(VerifySuccess)
; Read error message from editor.
; Plan to get rid of the problem:
; try to delete unneeded files but fail to find enough.
(DeleteUnneededFiles (Actor Ego))
; Temporarily save file in temporary space.
(Type (Text (String (":w" (FileName (File ("/usr/tmp/pandora"))) (Return)))))
(VerifySuccess)
; Save file more permanently.
(Mail (To Ego) (Content (File "/usr/tmp/pandora")))
; Ask system manager for more disk space.
(Mail (To System) (Content (Message (Request (MoreDiskSpace)))))
(Quit (Program Editor))

That is, PANDORA figures out first that she should edit her address book. When, during the write command, she gets an "out of disk space" error message from the editor, she determines that she should try to delete some unneeded files and try the write command again. Failing this, PANDORA finds somewhere else to save the file, i.e., in temporary file space. However, since temporary files are not secure from removal by other users, PANDORA predicts eventual failure of this goal and finds a more permanent solution, which is to mail the file to herself. This manages to save the file for the moment but does not get rid of the problem. PANDORA then generates a plan to acquire more disk space, which is to ask the powers that be for more.

Note that for this example the above is the sequence of actions of PANDORA, rather than a plan generated in its entirety before execution. This is necessary because PANDORA must handle a goal conflict during execution rather than before, since the problem does not arise until then. This example also requires PANDORA to:

1. Resolve a goal conflict and then retry a plan. This is handled by the meta-plan doing the repair.
2. Handle a failure of the normal plan for getting more disk space by finding another plan. This is done by the Plan Selector.
3. Construct a temporary plan (that is, one likely to fail eventually) in order to carry out a more permanent one.

4. The Overall Structure.

There are three types of objects which PANDORA's control structure must deal with: external events, goals, and plans.
The top level of control is simply a loop which deals with these in turn:

1. If there is an external event to process, it is dealt with. This process might potentially involve the inference system, the Noticer and the Goal Detector.
2. If there is none, then plans are chosen for any unplanned goals. This is carried out by the Plan Selector and Projector.
3. If there are no goals to plan, then the plan at the top of the plan queue is executed. This is carried out by the Executor.

5. Inference, Noticing and the Goal Detector: Reacting to Input.

In the rain example, PANDORA reacts to external input representing the fact that it is morning and raining outside by detecting the frames and associated goals that apply to these situations. In this first stage, all of the significant work has been done by the low-level inference and frame invocation processes in conjunction with two important parts of PANDORA, the Noticer and the Goal Detector.

The memory and frames package automatically watches for events that should invoke frames, choosing them based on the structure of each frame. (Exactly which pieces of a frame should be allowed to invoke it is still undecided, but for PANDORA's situations the connections have been obvious so far.) Here the "morning" frame and the "rainy day" frame are invoked. Associated with each frame which describes a situation (as opposed to an action, event or state) is a list of goals that PANDORA normally has when that situation arises. The morning frame includes PANDORA's goal of knowing what is going on in the world.

Also associated with each frame is a set of inference rules which might apply when that frame is active. Hearing that it is raining outside invokes the "rainy day" frame, which includes an inference rule to the effect that if someone goes outside they will get wet. In the case of remembering PAMELA's address, PANDORA has an inference rule which says that when it hears a friend's address, it should remember it.

6. The Plan Selector: Choosing and Installing Plans.

Whenever there is no more input, PANDORA's Plan Selector proceeds to plan for any goals it has. Considering that the main thrust of PANDORA is planning, the actual planning algorithm control structure is misleadingly simple for most goals. For each goal, this involves:

1. Choose the normal plan, if any.
2. Check it for conflicts with other plans by projecting its effects.
3. If it is all right, install it.

For example, PANDORA's normal plan for the "find out about the world" goal is to read the morning newspaper, which involves going outside, picking up the newspaper, and returning inside to read it. Her normal plan for remembering an address is to store it in its on-line address book using the editor.
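A schematic rendering of this three-step selection follows; the real system is written in PEARL, so this Python sketch and its names (normal_plan, project, conflicts, install) are merely illustrative stand-ins for PANDORA's frame database and projection machinery:

def select_plan(goal, kb):
    plan = kb.normal_plan(goal)          # 1. choose the normal plan, if any
    if plan is None:
        return None
    future = kb.project(plan)            # 2. simulate the plan's effects
    for conflict in kb.conflicts(future):
        # each detected conflict becomes a Resolve-Goal-Conflict meta-goal,
        # planned for like any other goal before the plan can be executed
        kb.add_goal(("resolve-conflict", conflict))
    kb.install(plan)                     # 3. install it on the plan queue
    return plan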
Thus the meat of the planning is buried in the knowledge base which must be relatively large for this type of planning. However, although it is large, it is quite broadly applicable to many classes of specific goals. Meta-goals are treated just like any other goals. Thus, for example, PANDORA also looks for a normal plan for the Resolve Goal Conflict meta-goal detected during simulation of the Retrieve Newspaper plan (see the next section for more on this). In this case PANDORA finds and executes the meta-plan Replan which looks for a plan that avoids the conflict. Replan finds that wanting to go outside without getting wet can be accomplished by the plan of putting on a raincoat first. Replan is a meta- plan rather than just an ordinary plan because it involves using general planning knowledge to find a modification to the current plan. 7. The Projector and Noticer: Simulatirq the Chosen Plan. PANDORA gives each plan a cursory check for conflicts by simulating the plan (currently only the top level of the plan is done), recording the effects of each step in a data base of events whose occurrence are pro- jected into the future. These events are subject to the usual inference processes, but any results are recorded in this future data base. For example, in the rain example, the planned act of going outside causes a problem. The effect of this act which is asserted into the future data base is that PAN- DORA expects to be outside in the near future. This causes the inference rule from the rainy day frame to infer that PANDORA will get wet. Included in PANDORA’s knowledge base are themes {collections of goal states organized under one property of an actor (Schank and Abelson (1977)) which organize sets of states which are to be maintained. One is the Preserve Health theme which includes the requirement that PANDORA remain dry and keep well fed. For each of these goals, the Noticer is informed that if any of these is violated, the Goal Detector should be informed. The Goal Detector will examine the state and generate a preservation goal. Note that it is not good enough to simply notice that a state that was in the data base has changed. Instead, each state which is a goal state or a precondition of an intended plan is marked as being such. Then, each time a state change is inferred, such a mark is checked for and if it is so marked, the Goal Detector is informed. If the culprit is an act the Goal Detector also knows that this is a goal conflict and generates the meta-goal of resolving this conflict. Meta-goals are treated like every other goal in PANDORA -- they are stacked and the nor- mal planning process is performed on them. Before handling a goal that is generated during simulation, PANDORA currently projects all of the steps of the currently proposed plan. However, note that after the simulation, the top level control structure implies that if there are still goals to be planned, the plan cannot be executed yet. In particular, any Resolve Goal Conflict meta-goals generated during the projection process must be planned for. 8. The Ekecutor: Using the Plan. Before PANDORA executes any regular plan, all currently active goals must have been planned for. If there are none, PANDORA chooses the next plan on the queue and executes it. The execution process involves two alternative steps. If the next plan to be executed has subplans, then these are installed in the queue, sub- ject to the same projection process as plans in the origi- nal planning phase. 
8. The Executor: Using the Plan.

Before PANDORA executes any regular plan, all currently active goals must have been planned for. If there are none, PANDORA chooses the next plan on the queue and executes it. The execution process involves two alternative steps. If the next plan to be executed has subplans, then these are installed in the queue, subject to the same projection process as plans in the original planning phase. If not, the plan is carried out by asserting its effects into the data base.

One exception to this is made for meta-plans. Since they are normally part of the act of planning and not just acts to be planned, they must be executed immediately (after simulation to detect conflicts). For example, the meta-plan which installs the Put On Raincoat plan in front of the PTrans in the rain example is run immediately.

9. More on the Goal Detector.

Goals may be detected in PANDORA during most other processes. The ways that goals are detected may be summarized as follows:

1. Most situations have goals attached to them which PANDORA needs to plan for. In addition to our common morning goals, a good example of this is the set of goals which arise whenever friends come to visit.
2. Preservation goals may be detected whenever any statechange is asserted or projected into the data base.
3. Most goal interactions (both positive and negative) may be detected whenever any statechange is asserted or projected into the data base which contradicts a desired state from a goal or a precondition of a plan.

The current implementation of PANDORA concentrates on detecting this third kind of goal, arising from goal interactions.

10. References

Carbonell, J. 1978. Computer Models of Social and Political Reasoning. Ph.D. Thesis, Yale University, New Haven, Conn.
Cohen, P. and Perrault, R. 1979. Elements of a Plan-Based Theory of Speech Acts. Cognitive Science, Vol. 3, No. 3.
Deering, M., Faletti, J., and Wilensky, R. 1981. PEARL: An Efficient Language for Artificial Intelligence Programming. In the Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, British Columbia. August, 1981.
Deering, M., Faletti, J., and Wilensky, R. 1982. The PEARL Users Manual. Berkeley Electronic Research Laboratory Memorandum No. UCB/ERL/M82/19. March, 1982.
Fikes, R. and Nilsson, N.J. 1971. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2, 189-208.
Genesereth, M.R. 1978. Automated Consultation for Complex Computer Systems. Ph.D. thesis, Harvard University.
Hayes-Roth, B. and Hayes-Roth, F. 1979. Cognitive Processes in Planning. RAND Report R-2366-ONR.
Newell, A. and Simon, H.A. 1972. Human Problem Solving. Englewood Cliffs, N.J.: Prentice Hall.
Rieger, C. 1975. The Commonsense Algorithm as a Basis for Computer Models of Human Memory, Inference, Belief, and Contextual Language Comprehension. In Theoretical Issues in Natural Language Processing, R. Schank and B.L. Nash-Webber (eds.), Cambridge, Mass.
Sacerdoti, E. 1977. A Structure for Plans and Behavior. Elsevier North-Holland, Amsterdam.
Schank, R.C. and Abelson, R.P. 1977. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey.
Stefik, M.J. 1980. Planning and Meta-Planning -- MOLGEN: Part 2. Stanford Heuristic Programming Project HPP-80-13 (working paper), Computer Science Department, Stanford University.
Sussman, G.J. 1975. A Computer Model of Skill Acquisition. American Elsevier, New York.
Wilensky, R. 1981. Meta-planning: Representing and using knowledge about planning in problem solving and natural language understanding. Cognitive Science, Vol. 5, No. 3.
Wilensky, R. 1982. Talking to UNIX in English: An Overview of UC. In the Proceedings of the National Conference on Artificial Intelligence. Pittsburgh, PA. August, 1982.
A FRAMEWORK FOR DISTRIBUTED PROBLEM SOLVING

Dave McArthur, Randy Steeb and Stephanie Cammarata
The Rand Corporation
1700 Main St.
Santa Monica, CA, 90406

ABSTRACT

Situations in which several agents must interact to achieve goals present difficulties of coordination and cooperation not found in single-agent problem solving contexts. Techniques for coordination and cooperation required in group problem solving are not well understood because most AI models deal with cases in which problems are solved by a single agent. In this paper we present a framework for distributed problem solving that describes some of the expertise an agent working in a multi-agent environment must have. An application of the framework to the domain of air-traffic control is discussed. Here each aircraft is viewed as an agent that must cooperate with others to achieve a conflict-free plan.

1. The agent must be able to continuously update possibly incomplete or incorrect world models (caused by limited situation assessment or by the existence of agents that change the environment dynamically and unpredictably).
2. The agent must be able to integrate information coming asynchronously from other agents.
3. The agent must know when new information invalidates his current attempt to plan and be able to modify his plan during execution if new and inconsistent information is uncovered.
4. The agent must be able to interrupt his planning to help others acquire knowledge to achieve their goals (especially when his goals and theirs interact or conflict).
5. The agent must be able to get others to help satisfy his goal or play his role.

Not all these competences are required of single-agent problem solvers, and, since most existing models of problem solving (e.g., STRIPS [1], BUILD [2], NOAH [3] and many others) focus on single-agent problems, little is known about how agents solving distributed problems achieve such competences. We have been attempting to understand distributed problem solving by developing computational models of agents that have these abilities. In this paper we present a view of agents in group problem solving situations that we have developed, and discuss a specific problem solver for distributed air-traffic control (DATC) consistent with this view.

II A VIEW OF DISTRIBUTED PROBLEM SOLVING

Using competences such as those discussed above, we have evolved a view of the important features of distributed problem solving agents that differs considerably from the view traditionally adopted in models of single-agent problem solvers. For example, many theories [1], [2] suggest that a central feature of the single-agent problem solver is that its activities are decomposed into separate, strictly ordered phases of information gathering, planning, and execution. However, the above competences indicate that, in situations where a given agent is not the sole cause of change, and therefore where not all important consequences of a planned action can be foreseen at the time of planning, it is essential that the agent be able to effectively interweave information gathering, planning and execution tasks.

Briefly, the main tenets of our view are:

1. Each agent has several distinct kinds of generic tasks, such as information gathering (sensing and input communication), information distribution (output communication), planning, plan evaluation, plan fixing, and plan execution.
2. Each kind of generic task invocation (or task instance) is a process: it can be suspended and resumed; hence tasks can be interwoven without losing continuity.
3. Each agent has a knowledge-base that represents his beliefs about other agents and their intentions, as well as information about the static environment and his own intentions. Within a single agent, the knowledge-base is shared by all task instances, like a HEARSAY blackboard [4].
4. Any change in the knowledge-base made by a task (e.g., information gathering) while another task is suspended (e.g., planning) will be visible to the latter
Each kind of generic task invocation (or task instance) is a process: it can be suspended and resumed; hence tasks can be interwoven without losing continuity. Each agent has a knowledge-base that represents his beliefs about other agents and their intentions, as well as information about the static environment and his own intentions. Within a single-agent, the knowledge-base is shared by all task instances, like a HEARSAY blackboard [41. by change in the knowledge-base made by a task (e.g., information gathering) while another task is suspended (e.g., planning) will be visible to the latter 181 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. 5. 6. 7. 8. 9. 10. 11. when it resumes. Thus tasks such as planning exhibit currency as well as continuity: they do not base computations on an outdated world-model. Task instances are both data-driven and event-driven. Instances of generic tasks are triggered in two ways: by sets of well-defined knowledge-base states; or well-defined events which result in changes to the knowledge-base. Tasks that are created do not immediately get executed but are enabled and may compete for processing resources. Each enabled task has a limited amount of self-knowledge, including explicit intentions and validity conditions. This information can be used to determine if a task is justified in continuing as conditions change. Thus tasks will exhibit relevance. Enabled tasks are not invoked in a fixed order, as in single-agent problem solvers. Rather the agent acts as a scheduler, reasoning about the ordering of task instances. More specifically, the agent uses a set of heuristic rules to prioritize processes representing enabled tasks. A task selected by the agent for execution is not necessarily allowed to run to completion. It is given a quantum of processing resources (time). The size of this quantum is also controlled by the agent. During the execution of a task or process (i) the task may complete, in which case it is eliminated from the set competing for resources; (ii) new tasks may be created because of knowledge-base changes effected by the running task (iii) the changes may cause existing tasks to lose their justification. After a task has consumed its allocated supply of resources (i.e., time), the agent reorders the priority of enabled tasks and selects a new one to run, in light of the conditions in the altered knowledge-base. It also eliminates unjustified tasks (if the tasks have not eliminated themselves). This procedure iterates until there are no more enabled tasks worth running. Speaking generally, then, we view the agent in a group problem solving situation as a kind of knowledge-based operating system. The view is similar to a HEARSAY blackboard model [4], except that (i) tasks are not suspended and resumed in such a model, (ii) in a HEARSAY-like model a given KS can have several different simultaneous activations, while our framework permits only one instance of a generic at a time, and (iii) the heuristic rules used by agents in our framework (to control the selection of tasks to run) typically use more knowledge of global context than do the scheduling rules in HEARSAY (which control KS invocation). Hence, the performance of our agents may often appear more goal-directed and less data-directed than comparable HEARSAY agents [S]. 
Our view is not a model of an agent in a specific distributed domain, but rather represents a theoretical framework for describing distributed agents, or a set of guidelines for constructing a specific model. Adhering to the framework, the user still needs to provide several sorts of domain-specific expertise. These include the procedures that comprise each generic task, the triggering conditions under which a task instance is to be created, the validity conditions under which it is permitted to continue, and the heuristic rules that order the priority of enabled tasks in light of the current state of knowledge.

In order to facilitate the development of our specific distributed problem solvers, we have implemented the framework in a simple task language. The task language is a set of INTERLISP functions that provides the user with a convenient vocabulary for stating the required domain-specific expertise. Once stated, the task language takes care of all the specifics of task management. It insures that an appropriate task instance is enabled whenever the triggering conditions of a user-defined generic task are met. The task language also takes care of the low-level implementation of tasks as resumable coroutines, and guarantees that these processes suspend after consuming the appropriate amount of time. Finally, it handles the details of scheduling the next task to run; the user only needs to state the properties of the scheduler his application requires. (For more details on the capabilities of the task language, see [6].) By attending to the details of task creation and management, the task language frees the user to focus on the theoretically more interesting issues of designing (and debugging) rules that achieve the appropriate interweaving of tasks.

To get a more concrete idea of the value of our view of distributed problem solving agents, and of how the user interacts with the task language to create a particular agent, we will now discuss examples of use of the language in developing our distributed air-traffic control (DATC) system. We begin with a brief description of the air-traffic control domain.

III DISTRIBUTED AIR-TRAFFIC CONTROL

The domain of Air Traffic Control exhibits a number of features that make it useful for studying distributed problem solving. Our ATC system consists of several functionally identical agents, one associated with each aircraft in a rectangular (14 x 23 mile) airspace. The agents (aircraft) essentially operate in parallel. (The details involved in getting the agents to time-share on a single processor are invisible to the user.) Aircraft may enter the airspace at any time, at any one of 9 infixes on the borders of the airspace, or from one of two airports. The main goal of each aircraft is to traverse the airspace to an assigned destination: either a boundary outfix or an airport. Each aircraft has only a limited sensory horizon; hence its knowledge of the world is never complete, and it must continually gather information as it moves through the airspace. Information may be accumulated either by sensing or communication. Agents are allowed to communicate over a limited band-width channel to other aircraft, for purposes of exchanging information and instructions. DATC is a group problem not only because agents may help one another gather information, but also because the goals of one agent may interact with those of another. Goal interactions come in the form of shared conflicts.
A conflict between two agents arises when, according to their current plans, the two will violate minimum separation requirements at some point in the future. (Currently we require a separation of 3 miles and 1000 feet of altitude.) When shared conflicts arise, agents must negotiate to solve them. In a crowded airspace, such goal conflicts can get particularly complex and involve several aircraft, thus necessitating a high degree of group cooperation. Our goal has been to discover problem solving methods by which a DATC agent can eliminate shared conflicts.

To define our system within the framework of the task language, we must identify the tasks comprising each agent and specify the expertise associated with each task. The top-level generic tasks of each DATC agent currently include:

1. Sensing (gathering information about positions and types of other aircraft).
2. Input-communication (gathering information about routes, plans and requests of other aircraft).
3. Output-communication (distributing information about your routes, plans, and requests to others).
4. Initial plan generation (computing a reasonable path through the airspace to one's outfix).
5. Plan evaluation (finding conflicts between your plan and the plans you believe others are following; reviewing new information for consistency with beliefs about others' plans).
6. Plan fixing (using existing plans and evaluations to create new plans that avoid conflicts with others).
7. Plan execution.

A. Defining DATC generic tasks and invocation conditions

A major part of defining a generic task is stipulating the conditions under which an instance of a generic task should be created. Consider plan evaluation. We want to define the DATC agent so that an evaluation task is created when (i) the agent has a plan and, via some information gathering task, learns the plan of some other aircraft, (ii) the agent changes his own plan, or (iii) the agent believes he knows the plan of another aircraft and senses a new position for that aircraft that may not be consistent with what the believed plan predicts. In the first two cases the kind of evaluation needed is "conflict detection"; in the third it is "consistency checking". Using the task language, the "conflict detection" case is implemented as follows:

(1) (CREATE-SUBTASK-TYPE 'Evaluation 'Agent)
(2) (CREATE-SUBTASK-TYPE 'DetectConflict 'Evaluation)
(3) (SET-TASK-FUNCALL 'DetectConflict
      '(COMPUTE-CONFLICTS Aircraft Other))
(4) (DEFINE-TASK-TRIGGER 'DetectConflict 'Evaluation
      '(SET-AIRCRAFT-PLAN Other <newplan>)
      '(Check new plan of Other for conflicts against yours)
      '(AND (AIRCRAFT-PLAN Other)
            (EQUAL <newplan> (AIRCRAFT-PLAN Other))))

(1) establishes the generic task of plan evaluation. Evaluation can be thought of as a class object in the SMALLTALK sense [7]. Instances of Evaluation represent specific plan evaluation tasks that might be created. The second argument in (1) says that when a plan evaluation task is created it is to be a top-level task of the agent. (2) establishes a generic subtask of plan evaluation. When the triggering conditions of DetectConflict are met and an instance of it is created, the instance becomes a subtask of the current Evaluation task of the agent.
Thus, while the agent is a scheduler that chooses from among enabled tasks that are instances of generics such as Evaluation and Sensing, an Evaluation instance itself is a scheduler that chooses from among instances of CheckConsistency and DetectConflict.

(3) associates a function call with DetectConflict. When an instance of a generic task becomes enabled, it may be selected to execute by the Evaluation task. If the task has previously executed and suspended, Evaluation knows where to resume; if this is the first time the task has been allocated processing resources, Evaluation needs to have a way of initiating the task. It does this by evaluating the function call. Note that (3) presupposes COMPUTE-CONFLICTS has been defined by the user and encodes the appropriate expertise.

(4) stipulates the conditions under which task instances of DetectConflict will be created and become a subtask of Evaluation. Roughly, it says "Any time you believe you know some other aircraft's plan, it is reasonable to create a DetectConflict task as a subtask of the current Evaluation task, to see if your current plan conflicts with his new one. This task is justified as long as you still believe you know the aircraft's plan and it is the new one".

Declarations such as (4) show how task creation is data-driven, how tasks insure that they are relevant as conditions change, and how tasks may be suspended and resumed.

B. Defining rules that interweave DATC task instances

But to interweave tasks such as plan evaluation, information gathering, etc., permitting the DATC agent to perform intelligently,
Rules such as (1) and (2) are the key to the DATC agent's ability to interweave its several enabled tasks in a way that is sensitive to changing conditions. Many of the rules the DATC problem solver currently employs are devoted to ordering tasks purely "internal" to the agent. These tasks, including sensing, evaluation, plan-fixing, and plan execution, often must be interwoven because of the existence of external, unpredictable agents. However, these tasks do not directly involve those agents. On the other hand, rules like (2) reason about tasks that involve interaction (communication) with others, either in the service of one's own goals or others' goals. In short, these rules encode cooperative strategies, which are critical to any effective group problem solving activity.

IV CONCLUSIONS

We are developing a system for distributed problem solving in DATC using our task language and adhering to the general view of distributed problem solving agents set out above. Our aim is not to produce a fixed distributed ATC problem solver so much as to perform computational experiments to discover policies for interweaving tasks and strategies of cooperation which are sound across a variety of group problem solving conditions. We believe the development of such policies to be a central goal for DAI. We are pursuing this goal by implementing several alternative policies and strategies, then evaluating the performances of the resulting systems. In order to complete our experiments, therefore, it is necessary to be able to easily modify the strategies and policies embedded in the system. This is straightforward because, by developing the DATC problem solver in the context of our general view of distributed problem solving, we are encouraged to write domain specific strategies for agents as rules that are explicit, modular, and hence modifiable.

ACKNOWLEDGMENTS

We are grateful to Lee Erman and Phil Klahr for comments on an earlier version of this paper.

REFERENCES

[1] Fikes, R., and Nilsson, N., STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3), 189-208, 1971.
[2] Fahlman, S., A planning system for robot construction tasks. Artificial Intelligence, 5(1), 1-49, 1974.
[3] Sacerdoti, E., A structure for plans and behavior. New York: Elsevier North-Holland, 1977.
[4] Erman, L., and Lesser, V., A multi-level organization for problem solving using many diverse cooperating sources of knowledge. Proceedings of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, USSR, 1975, 483-490.
[5] Corkill, D., and Lesser, V., A goal-directed Hearsay-II architecture: Unifying data-directed and goal-directed control. COINS Tech. Rep. 81-15, University of Massachusetts.
[6] McArthur, D., Steeb, R., and Cammarata, S., A model of problem solving in domains with multiple active agents. In preparation, April, 1982.
[7] Goldberg, A., and Kay, A., The SMALLTALK-72 Instructional Manual, SSL-76-6, Xerox PARC, 1976.
RABBIT: An Intelligent Database Assistant

Frederich N. Tou, Michael D. Williams, Richard Fikes, Austin Henderson, Thomas Malone
Cognitive and Instructional Sciences Group
Xerox Palo Alto Research Center

Abstract

We have designed and implemented an intelligent database assistant to aid the user in formulating a query. The system, named RABBIT, relies upon a new paradigm for retrieval, retrieval by reformulation, based on a psychological theory of human remembering. To make a query, the user interactively constructs a description of his target item(s) by criticizing successive example (and counterexample) instances. One of the key innovations in RABBIT is that instances from the database are presented to the user from a well-defined perspective inferred from the user's query description and the structure of the knowledge base. Among other things, this constructed perspective prevents the user from creating semantically improper query descriptions. RABBIT particularly facilitates users who approach a database with only a vague idea of what it is that they want and who thus need to be guided in the (re)formulation of their queries. RABBIT is also of substantial value to casual users who have limited knowledge of a given database or who must deal with a multitude of databases.

1. Introduction

RABBIT is an information retrieval interface which takes a new approach to information retrieval. The design of RABBIT began with an examination of ideas borrowed from cognitive science and knowledge representation. From those ideas, a new paradigm for information retrieval, retrieval by reformulation, has been developed, and a small experimental system based on that paradigm has been implemented in the Smalltalk programming language [Ingalls, 1978] on the Xerox Dolphin and Dorado personal computers [Lampson and Pier, 1980] and runs over a set of sample databases represented in KloneTalk [Fikes, 1981].

The motivation for designing a new kind of database interface was the unsuitability of existing database interfaces for casual users. Some database interfaces (e.g., SQUARE [Boyce et al, 1975] and SQL [Chamberlin et al, 1976]) require many hours of instruction to learn; others have a syntax which users find difficult to use and understand (e.g., the boolean expressions of DIALOG [Lockheed, 1979]). Interfaces based on the relational data model [Codd, 1970] usually require the user to know in advance which tables and attributes he will be needing, while users of network databases (such as ZOG [Robertson et al., 1981]) frequently get lost during the course of their search.

RABBIT was designed to solve four problems which we conjecture to be major sources of difficulty for casual users attempting to retrieve information: (1) the user has incomplete knowledge about the descriptive terms needed to create a query, (2) the user's own intention is only partly articulated, (3) a considerable amount of information is known in the database about any given item, and hence, the presentation of that information needs to be limited or structured in some way, and (4) the structure of the database(s) is heterogeneous, with the result that the "shape" of the database changes depending upon where one is within the database.

Two techniques of human remembering which RABBIT incorporates are descriptive retrieval and retrieval by instantiation.
The basic tenet of descriptive retrieval is that people retrieve information from (their own) memory by iteratively constructing partial descriptions of the desired target item [Bobrow and Norman, 1975; Norman and Bobrow, 1979; Williams, 1981; Williams and Hollan, 1981]. Retrieval by instantiation postulates that the information retrieved at each iteration of the retrieval process is in the form of an instantiation, i.e., an example item suggested (e.g., analogically or metaphorically) by the partial description [Williams, 1981].

2. Retrieval by Reformulation

The basic principle underlying RABBIT is a new paradigm for information retrieval elaborated from the notion of retrieval by instantiation: retrieval by reformulation. The user makes a query by incrementally constructing a partial description of the item(s) in the database for which he is searching. RABBIT provides a description of an example instance, an instance in the database which matches the user's partial description. The function of this example is to aid the user in articulating his tacit knowledge. The user can select the various descriptors from the example and incorporate those descriptors, or variations of those descriptors, into his partial description, thus reformulating his initial query. This query-building process is iterative in that the user can at any time request the interface to retrieve a new example instance, one which matches the latest version of his (partial) description, and then use the descriptors of that new image to build up his query description further. As the user builds his query, RABBIT is constructing (from the partial description) a perspective from which to present the next instance.

Figure 1 shows RABBIT in the midst of a retrieval interaction. The interface consists of four primary window panes. The 'Description' pane specifies an implicitly defined boolean expression which appears to the user as a partial description of the item(s) he is seeking. The 'Example' pane contains an example item which matches the partial description as of the last user-initiated retrieval cycle, presented from the RABBIT-defined perspective. More precisely, it contains a description, called the image, of an instance from some well-defined perspective (e.g., "The Little Hsi Nan Restaurant" can be viewed from the perspectives of "a place which serves food," "an investment," and "a business"). The 'Matching Examples' pane lists instances which satisfy the partial description as of the last retrieval cycle. The 'Previous Description' pane contains the description used on the last retrieval cycle, which determines the perspective for presentation of the example and the list of matching examples. The example pane command pop-up menu is also displayed.

Figure 1. RABBIT Screen Display
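To make the reformulation cycle concrete, the following sketch runs two retrieval cycles over a toy database. The dictionary representation and matching rule are assumptions for illustration; RABBIT itself operates over a KL-ONE network, not Python dicts.

# A minimal sketch of retrieval by reformulation; data is invented.

DB = [
    {"classes": {"Restaurant", "Business"}, "cuisine": "Chinese",
     "location": "PaloAlto", "name": "The-Little-Hsi-Nan"},
    {"classes": {"Restaurant", "Business"}, "cuisine": "French",
     "location": "MountainView", "name": "Chez-Example"},
]

def matches(item, description):
    """An item matches if it carries every required descriptor."""
    return (description["classes"] <= item["classes"] and
            all(item.get(a) == v for a, v in description["attrs"].items()))

def retrieve(description):
    """Return one matching example instance plus the full match list."""
    hits = [i for i in DB if matches(i, description)]
    return (hits[0] if hits else None), hits

# Cycle 1: a vague query; the example shown acts as a counterexample.
desc = {"classes": {"Restaurant"}, "attrs": {}}
example, _ = retrieve(desc)

# Cycle 2: the user criticizes the example by requiring one of its
# descriptors with a different value, then retrieves again.
desc["attrs"]["cuisine"] = "French"
example, hits = retrieve(desc)
print(example["name"], len(hits))   # -> Chez-Example 1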
The example instance mentioned above is a central element of the interface. It serves several purposes: it functions as a template, it permits access to additional descriptors, it provides semantic resolution of potentially ambiguous terms, and it frequently serves as a counterexample. The example instance is a template in the sense that its presentation (the image) provides a pattern for making a query via the descriptors comprising the instance's image. It permits access to new descriptive terms through the alternatives and describe commands elaborated below. It also provides semantic resolution in that the context of a term such as the role name 'manufacturer:' establishes and refines the term's meaning. The role name 'manufacturer:' could refer to a person or a nation or a corporation. The statement 'manufacturer: Xerox' in the context of a description of a computer product resolves a host of potential meanings. The example instance is also a counterexample to the user's intentions, since it is unlikely that the first instance retrieved will be exactly what the user is looking for. Rather than simply permitting the user to express his displeasure with the counterexample and have RABBIT try to guess what is wrong with it, the system tries to encourage the user to articulate what is wrong with the instance presented. The counterexample's simple presence serves to remind the user that his query description is incomplete or wrong and, in addition, points out the particular parts of his description which need correction or modification. Finally, since the amount of information known about the retrieved instance could be considerable, the information actually presented in the image is limited to only that information which is known from a given perspective inferred from the query description which was the basis for retrieving that example instance. (E.g., information concerning the dinner menu or house specialty of a given restaurant would be available from the perspective of "a place which serves food" but not from the perspective of "a business." So if the user had begun his query with the descriptor 'Business', then the image of the retrieved instance, even if it is a restaurant, would not, initially, include information about its dinner menu.)

The current implementation of RABBIT supports a small set (5) of basic operations for creating a query description given the descriptors provided in the image of the example instance. These operations, shown in figure 1, are require and prohibit (which specify that the given descriptor is or is not to be a descriptor of the retrieved instance, respectively), alternatives (which presents the user with a pop-up menu of alternative descriptors to the given one), specialize (which shows the specializations of the given descriptor), and describe (which allows the user to examine a description of a given descriptor or to describe recursively what that descriptor should be). ([Tou, 1982] and [Tou, Williams, Malone, Fikes, and Henderson, 1982] contain a more complete discussion of the paradigm of retrieval by reformulation and the user interface to RABBIT.)

This paradigm of retrieval by reformulation, in effect, defines a form of interaction by which RABBIT can assist casual users in formulating queries. Much of the intelligence of RABBIT comes from control of this interaction by appealing to the conceptual structure of the database.
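A rough rendering of these five operations on a partial description might look like the following. The set-based representation and helper arguments are assumptions for this sketch, since RABBIT's descriptions are KL-ONE structures rather than Python objects.

# An illustrative sketch of the five description-building operations.

class PartialDescription:
    """Required/prohibited descriptor sets standing in for a KL-ONE
    query description (an assumption for this sketch)."""
    def __init__(self):
        self.required, self.prohibited = set(), set()

    def require(self, d):
        """The descriptor must hold of any retrieved instance."""
        self.required.add(d)

    def prohibit(self, d):
        """The descriptor must not hold of any retrieved instance."""
        self.prohibited.add(d)

    def alternatives(self, d, siblings):
        """Alternative descriptors to d (a pop-up menu in RABBIT)."""
        return [x for x in siblings if x != d]

    def specialize(self, d, subconcepts):
        """Specializations of d in the concept network."""
        return subconcepts.get(d, [])

    def describe(self, d):
        """Recursively build a description of what d itself should be."""
        return PartialDescription()

pd = PartialDescription()
pd.require("Restaurant")
pd.prohibit("GermanRestaurant")
print(pd.alternatives("FrenchRestaurant",
                      ["ChineseRestaurant", "FrenchRestaurant"]))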
3. The KL-ONE Database

The KL-ONE epistemology for representing knowledge [Brachman, 1979] has had a major influence on the development of RABBIT. The experimental database which RABBIT accesses is a KL-ONE network. Our long term vision is to have RABBIT as a query assistant in a local machine, with a local KL-ONE conceptual structure transforming the user's queries into acceptable forms to query remote databases. In a sense, what are now KL-ONE instances will be the data objects of these remote databases.

The descriptions which the user creates (the partial query description) and critiques (the instance descriptions, or images) are composed of two types of descriptors: instance classes and attribute-value pairs. Instance classes (shown in boldface in the rest of this paper) denote general classes of instances (e.g., 'Business', 'City', and 'Entity' are all instance classes), whereas attribute-value pairs specify the properties of a specific instance, and in particular, the relationships between that instance and other instances (e.g., the instance 'The-Little-Hsi-Nan' has attribute-value pair 'location: PaloAlto'). As was mentioned earlier, instances are the items in the database.

The correspondence between KL-ONE concepts and RABBIT descriptions is as follows. The instance classes comprising RABBIT descriptions are represented by KL-ONE generic concepts, while instances correspond to individual concepts. Attribute-value pairs are implemented as KL-ONE role-value pairs, and constraints (in the query description) on what values should be filling an attribute (called a value constraint) correspond to a value restriction on the corresponding role. In general, a query description in RABBIT corresponds to a (possibly new) generic concept which subsumes the (individual concepts corresponding to the) instances matching that query description.

The fact that KL-ONE is an inheritance network allows heterogeneous data to be easily represented in a structured format. So, for example, information common to a set of individual concepts can be factored out and associated with a generic concept subsuming those individual concepts, while the individual concepts retain that information which distinguishes them from one another.

In addition to the ability to construct perspectives, described below, we have been able to use the KL-ONE semantics to control the search for alternative descriptors when the user issues the alternatives command. The alternatives to a given instance class were originally those generic concepts which were "brothers" of the generic concept corresponding to the given instance class. But the number of brothers could be quite numerous, especially near the "top" of the heterarchy (i.e., close to the generic 'Entity'). However, KloneTalk allows the builder of the database to partition generics into decompositions; for example, subconcepts of 'Restaurant' such as 'ChineseRestaurant', 'FrenchRestaurant', and 'GermanRestaurant' can be grouped under the 'cuisine' decomposition, while subconcepts like 'PaloAltoRestaurant', 'MountainViewRestaurant', and 'LosAltosRestaurant' can be grouped under the 'location' decomposition. Then if the user asks for alternatives to 'FrenchRestaurant', he is shown only those alternative concepts which belong to the same decomposition as 'FrenchRestaurant'.
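The decomposition-restricted alternatives computation can be sketched as follows, assuming a toy network in which each subconcept is tagged with its decomposition; the data and function are invented for illustration.

# A sketch of decomposition-controlled alternatives; names are invented.

SUBCONCEPTS = {
    "Restaurant": [
        ("ChineseRestaurant", "cuisine"),
        ("FrenchRestaurant", "cuisine"),
        ("GermanRestaurant", "cuisine"),
        ("PaloAltoRestaurant", "location"),
        ("MountainViewRestaurant", "location"),
        ("LosAltosRestaurant", "location"),
    ],
}

def alternatives(concept, parent):
    """Brothers of `concept`, restricted to its own decomposition
    rather than all siblings under `parent`."""
    decomposition = dict(SUBCONCEPTS[parent])[concept]
    return [c for c, d in SUBCONCEPTS[parent]
            if d == decomposition and c != concept]

print(alternatives("FrenchRestaurant", "Restaurant"))
# -> ['ChineseRestaurant', 'GermanRestaurant']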
4. Perspectives

One of the main uses of KL-ONE is the implementation of perspectives. A perspective is simply a way of describing an event or item from a particular viewpoint [Bobrow and Norman, 1975; Bobrow and Winograd, 1977; Goldstein and Bobrow, 1980; Goldstein, 1980]. We introduce the notion of a dynamic perspective; thus, the perspective from which the user views the instances in the database changes depending on his partial description and on where he is within the database. In RABBIT a perspective specifies which descriptors (instance classes and attribute-value pairs) should be included in the image of any instance presented to the user. There are two distinct mechanisms RABBIT uses to construct a perspective.

First, it filters the attributes to be presented to a user by including only attributes implicitly acknowledged by the user. Since the partial description is a representation of the user's intent to the computer, that description should be the basis for determining what information should be included in the image of the example instance. In RABBIT the attributes included in the image are exactly those attributes which belong to the instance classes occurring in the partial description, while the instance classes in the image are the specializations of the instance classes in the partial description. Thus, if one were to see the computer described in figure 1 retrieved under the partial description 'Product' (i.e., without the descriptor 'Computer'), then only the attributes 'name', 'manufacturer', and 'cost' would be presented. Once the user refines the partial description to specify that he is seeking a computer, additional roles (e.g., 'disk', 'CPU', ...) would appear.

A second mechanism for creating perspectives actually extends the perspective of any given instance beyond attributes directly held by the object. Note in figure 1 that because the user has created an embedded description about the disk of the computer sought, aspects of the disk that the user considers important (e.g., capacity) have been compressed into the image of the computer presented.

Perspectives serve four main functions in the RABBIT interface. Perspectives are used to control the amount of information presented to the user. As we mentioned earlier, the amount of information known about any given instance in the database could be substantial; in some databases we have considered, it runs to the hundreds. The presentation of that information must be limited in some fashion.

Perspectives are also useful for facilitating the user's understanding of instances, since the information provided in the image is with respect to a well-defined perspective (determined by the partial description). The set of attributes shown in the image should provide a fairly coherent and cohesive view of the example instance since that set arises from the definition of (a generic concept corresponding to) an instance class, which presumably has semantic integrity. We sometimes refer to this effect as semantic resolution. The name of an attribute standing alone might be ambiguous, but since an attribute commonly appears with other, related, attributes within some context (the perspective), the user may be able to infer the meaning of an unknown attribute from its context.

A third function of perspectives is to enforce a certain class of semantic consistency. As we said earlier, instances are always being viewed from some perspective inferred from the partial description. Consequently, the user does not have access to a particular attribute unless he has first made the instance class owning that attribute a required descriptor of his partial description. Thus, only attributes which are "appropriate" or "relevant" to his partial description are available to the user for inclusion in the partial description. For example, if 'Book' and 'Science-Fiction' are instance classes in the user's partial description, then there would be no way for the user to add the attributes 'employees:' or 'CPU:', which are not attributes of either 'Book' or 'Science-Fiction', to his partial description.
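The first (filtering) mechanism can be sketched in a few lines. The class-to-attribute table and the sample instance below are invented for illustration; real images are computed from KL-ONE generic concepts rather than dictionaries.

# A hedged sketch of perspective filtering: the image of an instance
# shows only attributes owned by instance classes in the partial
# description. All data here is invented.

CLASS_ATTRS = {
    "Product":  ["name", "manufacturer", "cost"],
    "Computer": ["disk", "CPU"],
}

def image(instance, partial_classes):
    """Project an instance onto the attributes sanctioned by the
    instance classes the user has already required."""
    visible = [a for c in partial_classes for a in CLASS_ATTRS.get(c, [])]
    return {a: instance[a] for a in visible if a in instance}

star = {"name": "Star-8011", "manufacturer": "Xerox", "cost": 16000,
        "disk": "Xerox-10", "CPU": "custom"}   # invented sample values

print(image(star, {"Product"}))              # no disk/CPU yet
print(image(star, {"Product", "Computer"}))  # refined: disk/CPU appear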
This notion of semantic consistency extends even further in the use of embedded descriptions. For example, suppose the user sees the attribute-value pair 'disk: Xerox-10' in the image of the example instance Star-8011, but does not know what the Xerox-10 disk is. One option the user has is to examine the description of the Xerox-10 using the describe command and then create a description of the disk he desires for his computer. But the description of what value should fill the attribute 'disk:' can not be just any description, such as the description of a restaurant, but rather the description of an object which can be the disk of a computer. The interface enforces that constraint by imposing the requirement that the description of the disk which the user creates must include among its descriptors the instance class(es) corresponding to the generic concept(s) which are the value restrictions of the role corresponding to the attribute 'disk:'. In addition to limiting what kind of description can be created, those instance classes also provide a set of attributes which indicate to the user some of the ways in which he can describe the object, in this case, a disk for a computer.

Finally, perspectives can be used to manage a non-uniformly structured database. If a database is non-uniform, then the perspective should change as the user moves around within the database, with the consequence that the kind and amount of information presented should also change. So as the user refines or modifies his partial description by adding or removing instance classes, he is also changing the perspective for viewing example instances, and hence the attributes which appear in the image. In particular, adding new instance classes has the effect of adding new attributes to the image of the next example instance retrieved. Consequently, the user does not need to concern himself with the shape of the database as expressed in the attributes of instance classes; all he must do is decide on the conceptual shape (e.g., is the user looking for a business, a product, or something else), and the corresponding attributes are made available to him automatically.

5. Future Work

We currently have a running implementation of RABBIT accessing a database of approximately 200 individual concepts and 50 generic concepts. With respect to further utilization of KL-ONE, we anticipate that KL-ONE structural descriptions are a means for supporting general constraints within query descriptions, but there are problems which need to be solved, such as the operation and appearance of the user interface for creating and editing constraints.

A more general issue is the question of how perspectives are created. In the current implementation, each instance class defines a perspective for viewing instances of that class. But this tight coupling between perspectives and instances seems to be too restrictive. It may be that the organization of perspectives should really be orthogonal to the organization of instance classes.

A broader area requiring further work is implementing RABBIT for a "real" database, by which we mean a moderately sized database (with thousands of items) which is changing. The current version of RABBIT can not be used to modify the database, but we feel that the ideas underlying RABBIT could be easily adapted for inserting and deleting data. And finally, RABBIT needs to be tested on actual users to determine its strengths and weaknesses--which ideas are useful and which should be modified or even discarded.
6. Summary

This paper has described an intelligent database assistant named RABBIT which relies on a new paradigm for information retrieval, retrieval by reformulation, based on a psychological theory of human remembering. The four main ideas underlying this paradigm are:
1. retrieval by constructed descriptions
2. interactive construction of queries
3. critique of example instances
4. dynamic perspectives.
The first three of these ideas had their origins in human psychology, but the development of the fourth idea above--dynamic perspectives--was motivated and influenced strongly by the KL-ONE knowledge representation language.

One of the key ideas in RABBIT is the use of user interaction and the structure of the database to construct a perspective from which to present any given instance. In RABBIT we have used perspectives as a mechanism for:
--controlling the type and amount of information presented
--facilitating the user's understanding of instances
--enforcing certain kinds of semantic consistency
--organizing and managing heterogeneous data.

Our experimental implementation of RABBIT looks very promising, but only usage by real users can determine the effectiveness and usefulness of the paradigm of retrieval by reformulation.

Acknowledgements

A portion of this work was carried out by the principal author under the auspices of the MIT intern program at Xerox PARC and is based on a thesis submitted in partial fulfillment of the requirements of the degrees of Bachelor of Science and Master of Science in the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology in June, 1982. The authors would also like to acknowledge the original stimulus for this work stemming from an exciting conference on artificial intelligence and human-computer interfaces sponsored by the Army Research Institute. In particular, we would like Stan Halpern, Janet Kolodner, and Albert Badre to know a part of what came from their efforts. We would also like to thank John Seely Brown, Tom Moran, Rick Cattell, Laura Gould, and Richard Burton for their patient discussions and guidance. Each contributed crucial pieces of the puzzle we are still putting together. Finally, we would like to thank the other members of the Cognitive and Instructional Sciences Group at Xerox PARC for their continuing support and critique throughout the development of RABBIT.

References

Bobrow, D.G., and Norman, D.A. "Some Principles of Memory Schemata," in D.G. Bobrow and A.M. Collins (Eds.), Representation and Understanding: Studies in Cognitive Science. New York: Academic Press, 1975.

Bobrow, D.G., and Winograd, T. "An Overview of KRL: A Knowledge Representation Language," Cognitive Science, 1, pp. 3-46, 1977.

Boyce, R.F., Chamberlin, D.D., King, W.F., and Hammer, M.M. "Specifying Queries as Relational Expressions: The SQUARE Data Sublanguage," Communications of the ACM 18, 11 (Nov. 1975), pp. 621-628.
Brachman, R.J., Bobrow, R.J., Cohen, P.R., Klovstad, J.W., Webber, B.L., Woods, W.A. "Research in Natural Language Understanding: Annual Report, 1 September 1978 to 31 August 1979," BBN Report No. 4274. Cambridge, MA: Bolt Beranek and Newman Inc., August, 1979.

Chamberlin, D.D., Astrahan, M.M., Eswaran, K.P., Griffiths, P.P., Lorie, R.A., Mehl, J.W., Reisner, P., and Wade, B.W. "SEQUEL 2: A Unified Approach to Data Definition, Manipulation, and Control," IBM Journal of Research and Development 20 (Nov. 1976), pp. 560-575.

Codd, E.F. "A Relational Model of Data for Large Shared Data Banks," Communications of the ACM 13, 6 (June 1970), pp. 377-387.

Fikes, R. "Highlights from KloneTalk: Display-Based Editing and Browsing, Decompositions, Qua Concepts, and Active Role-Value Maps," Proceedings of the 1981 KL-ONE Workshop, Jackson, New Hampshire, October, 1981.

Goldstein, I.P. "PIE: A network-based personal information environment," Proceedings of the Office Semantics Workshop, Chatham, Mass., June, 1980.

Goldstein, I.P., and Bobrow, D. "Descriptions for a programming environment," Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford, CA, August, 1980.

Ingalls, D.H. "The Smalltalk-76 Programming System: Design and Implementation," Conference Record of the Fifth Annual ACM Symposium on Principles of Programming Languages, Tucson, AZ: January 1978, pp. 9-16.

Lockheed Information Systems, Guide to DIALOG Searching, Palo Alto, CA, 1979.

Norman, D.A., and Bobrow, D.G. "Descriptions: An Intermediate Stage in Memory Retrieval," Cognitive Psychology 11 (1979), pp. 107-123.

Robertson, G., McCracken, D., and Newell, A. "The ZOG Approach to Man-Machine Communication," International Journal of Man-Machine Studies (1981) 14, pp. 461-488.

Tou, F. RABBIT: A novel approach to information retrieval, unpublished M.S. thesis, Massachusetts Institute of Technology, Cambridge, Mass., forthcoming.

Tou, F.N., Williams, M.D., Malone, T.W., Fikes, R.E., and Henderson, A. RABBIT: An Intelligent Interface. Xerox Technical Report, forthcoming, 1982.

Williams, M.D. "Instantiation: A Data Base Interface for the Novice User," Xerox Palo Alto Research Center Working Paper, 1981.

Williams, M.D., and Hollan, J.D. "The Process of Retrieval from Very Long Term Memory," Cognitive Science 5 (1981), pp. 87-119.
SCENARIOS AS AN AID TO PLANNING

Rajendra S. Wall and Edwina L. Rissland
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003

ABSTRACT

In many complex domains full scale detailed solutions to problems -- plans and ways to implement them -- are not possible or practical. Rather than attempt to produce such detailed solutions, we propose an alternative of using scenarios: a set of "snapshots" of the future that outline possible courses of action, actions of foreign processes, and their consequences. In this paper, we present the scenario idea and discuss its use as an aid to tactical planning. We discuss an implementation of these ideas in a Tactical Assistant to aid planning in the errand running and conflict simulation game domains and present an example illustrative of its capabilities.

I INTRODUCTION

In many complex domains full scale detailed solutions to problems, i.e., plans and ways to implement them, are not possible or practical. This could be due to the inherent complexity of the domain, as in oil well analysis or medical diagnosis; elements of uncertainty and chance, as in errand running or weather prediction; actions of agents uncontrolled by or unfriendly to the protagonist process, as in chess or GO; or a combination of all of these factors. Sacerdoti [7] has suggested that in complex domains complete plans are not feasible. Rather than attempt to produce such detailed solutions we propose an alternative of creating a scenario: a set of "snapshots" of the future that outline possible courses of action, actions of foreign processes, and their consequences. This scenario can then be used to decide courses of action, outline contingency plans, or force reexamination of goals set in the problem statement.

An alternative form of planning was the multi-level knowledge-source/blackboard model proposed by Hayes-Roth [3] to plan the running of errands. The model has as its base a cognitive study showing that when humans plan errand running they rarely do so in a methodical straightforward manner. Rather they seem to seize key events and decisions and fix them in the plan, building the rest of the intermediate plan around them in an "opportunistic" manner. Work has also been done by Schank [8] with various forms of memory organization to aid in completing tasks. His MOP structure is a way of relating various sequences of actions that can be used to handle differing circumstances that may confront a processor. The POLITICS system of Carbonell deals with foreign processes in the form of adversaries. He has developed a series of domain independent heuristics for performing counterplanning -- planning to disrupt or impede an opponent's goals [2]. These heuristics are such things as "to protect your goals, threaten the enemy's high valued goals". Analysis has been done of actual military planning in a controlled situation by Hayes-Roth [4]. The study categorized the cognitive strategies used by planners and the effectiveness of those strategies in terms of the correctness of the plans produced. The three strategies discovered turned out to be very similar to those used by people who play conflict simulation games. The strategies are: analysis of problem specific information, mental simulation, and retrieval of past experience. In our work we employ the third: scenarios are generated by retrieval and modification of past experience.
II THE SCENARIO IDEA

The issue of how to plan and act in uncertain environments was investigated by Siklossy & Dreussi [9] and Chien & Weissman [1]. Both suggest not planning too far into the future or in too much detail because the world state may not be what you expect when you get there. The use of scenarios allows meaningful and helpful statements to be made about such uncertain future states.

A scenario is a collection of projections of future events, each of which is based on a set of assumptions about the behaviors, intentions and effects of the various processes involved. Each projection depicts the course of events in this interpretation of the future -- the important events or actions taken by the various processes involved. Stated another way, the total set of possible future world states is in effect divided up into classes or categories on the basis of these assumptions about the processes involved. From each class one or more examples are chosen to illustrate the kinds of events that could occur in that class of worlds.

Projections of the future are focused on the goals of the process requesting the scenario. In domains like errand running or conflict simulation games there are numerous tactics available with which to secure goals. Each tactic describes an action or set of actions designed to aid in achieving a particular goal or subgoal given a particular situation. Any goal may have numerous applicable tactics. The set of applicable tactics may change depending on the situation. Any particular tactic may affect more than the designated goal, favorably or adversely. Additionally, in complex domains the tactics are not guaranteed to produce a solution. Thus for each goal a tactic or set of tactics can be chosen. A set of tactics to achieve all of the goals and the order to apply them is called a course of action. For any given set of goals there may be different courses of action available, each with advantages and disadvantages.

A listing of a way the future could "actually occur", with actions by all interested processes included, is called the course of events. This listing may be thought of as a "series of snapshots" showing the state of the world as it undergoes changes. The activity resulting in the creation of a hypothetical course of events is called a projection.

In addition to the friendly processes, there may be foreign processes operating in the domain. These may include potentially disruptive processes whose actions may prevent accomplishing the goals of the friendly processes. Foreign processes are grouped into three categories according to their intentions. A random process is one that occurs without any discernible reason. Usually these are environmental processes unrelated to any goal or purpose. A competitive process is one that may have goals that conflict with the friendly processes' goals. Usually conflicts arise from mutual desires for domain resources. It is possible for competing processes to cooperate to accomplish common goals. Finally there are unfriendly or inimical processes. These are processes whose goals are always in conflict with friendly goals and whose actions can always be interpreted as hostile; this is the case in most gaming domains. In order to make a projection of the future, assumptions must be made about the behaviors and intentions of the foreign processes.
For example, the behavior of an unfriendly process in a given situation might be assumed to be aggressive in one projection and neutral in another. Or, the effects of a random process in a given situation might be assumed to be helpful in one projection and detrimental in another. With a given set of assumptions, foreign process actions can then be postulated. These actions are then interpreted in light of their effects on the proposed course of action under consideration.

This then is the basis of scenarios: In response to a set of goals, a set of courses of action to achieve the goals is proposed. Consideration is then given to the effects of foreign processes. This consideration is along parameters whose values represent assumptions about the effects, behaviors and intentions of the foreign processes involved. At least one projection of the future course of events that could be expected given these assumptions is then generated as an example of that possible world category. A complete scenario consists of examples, each of which is an analysis of each course of action under a representative sample of reasonable assumptions about foreign processes. A course of events example consists of a "sequence of snapshots" that make up a projection of future activity. A scenario is thus a set of analyses of courses of action within assumptions about foreign processes.

A. The Structure of a Scenario

The basic outline of the structure of a scenario is as follows:

I. Course of Action 1. Set of tactics chosen to accomplish the goals within the limitations of the given situation.
   A. Segment 1.1. Possible World Classification Parameters and their values: classification of effects of, or thresholds for, random processes; behaviors and intentions of competitive or unfriendly processes.
      i. Projection 1.1.1. Example of Course of Events given these assumptions: tactics and their expected consequences; foreign process effects.
      ii. Projection 1.1.2. Example of Alternative Course of Events given these assumptions.
      iii. ...
   B. Segment 1.2. Analysis of the first course of action based on the next set of reasonable assumptions. Projections of the future similar to Segment 1.1.
II. Course of Action 2. A different way of achieving the goals. Segments and Projections analogous to I.
III. ...
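One plausible way to render this outline as a concrete data structure is sketched below; the field names and the sample tactics are our own illustrative choices, not the authors' representation.

# A sketch of the Course of Action / Segment / Projection nesting.

scenario = [
    {   # Course of Action 1: a chosen set of tactics
        "tactics": ["advance-on-road", "attack-first-force"],
        "segments": [
            {   # Segment 1.1: one set of assumptions about foreign processes
                "assumptions": {"enemy": "aggressive", "weather": "helpful"},
                "projections": [
                    # each projection is a sequence of snapshots
                    ["move-and-destroy-first-force",
                     "enemy-attacks-rear",
                     "turn-encircle-destroy-second-force"],
                ],
            },
            {   # Segment 1.2: the next set of reasonable assumptions
                "assumptions": {"enemy": "neutral", "weather": "detrimental"},
                "projections": [["first-force-escapes", "regroup"]],
            },
        ],
    },
    # Course of Action 2, 3, ... would follow the same shape.
]

# e.g. count the projections the user must review
print(sum(len(s["projections"]) for c in scenario for s in c["segments"]))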
The experience is then an ex illustrate the current si tuati on amp1 e and the The Tactical Assistant during could also aid execution of the suggested plan. the If futures. This is similar to user Constrained on which it remembered Example Generation method of Rissland is based [51. Differences between the something went wrong it would still have all the alternative courses of action available to recover situation and the exact current and continue towards the goal. situation indicate modifications or interpreta- tions that must be made in order to comprehend the full implications of the experience. These modi- fications produce the projections that make up the B. How Does the Tactical Assistant Work? --- scenario . The Tactical Assistant has detailed knowledge about the domain that it uses in making projections. This knowledge includes the tactics IV SOLUTION available in the domain, a detailed model of the A COMPUTATIONAL - domain, and detailed models of foreign processes In this section we introduce a means of operating in the domain. The knowledge may be in incorporating the theory of scenarios into a the form of rules for simulation, frames of problem solving aid. experience, or both. From the goal set and the current Tactical Assi stant d evelops the situation plausible the A. The Tactical Assistant courses of action. Since any particular tactic A Tactical Assistant is a form of expert system designed to help a user, who has high level strategic goals but is not immediately concerned with the low level details of a domain, choose a specific course of action. The user proposes a set of goals to be achieved within the current situation. The Tactical Assistant takes this set and generates a scenario set analysis of the courses of action that may achieve the goals, and presents it to the user. In response the user may change the goal set, redefine the situation, (e. g allocating different resources to different gills) or perhaps make suggestions about possible tour ses of action. The Tactical Assistant then generates a revised scenario. This iteration continues until the user is satisfied with the outline of the future presented (or gives up>. may affect more than its designated goal the Tac- tical Assistant must understand handle such side effects. The Tactical Assistant be able to examines the available course s of action 9 wei 0s the costs and benefits, notes any possi ble problems with foreign processes, and summarizes the courses of action as a scenario for the user. C. Functional Specifications Figure 1 shows a functional diagram of scenario generation by retrieval and modifica of experience mechanism. the tion In the errand running domain, this process can be thought of as the user first making a list: “These are the things I would like to do today”. The Tactical Assistant, with its knowledge of the town, traffic patterns, etc, generates a scenario laying out the ways to accomplish the goals given different tactical choices, noting any conflicts or problems that may arise. This scenario is reviewed by the user, who then may reschedule, postpone or cancel proposed actions, in turn causing the Tactical Assistant to revamp the scen- ario. The system through its analysis can alert the user to possible problems. This analysis is the equivalent of asking a series of “What if?” questions, for example, “What if I run out of money?” or “What if I get stuck in traffic?” Fig. 
1: Scenario Generation Mechanism 178 Briefly, the major subsystems and their functions are: The Situation Analysis Subsystem examines the arrangement of friendly and enemy forces in relation to the goals of the friendly proc- ess. It then generates descriptors for each force. These descriptors are of a high level semantic nature which are used as constraints to locate experiences of similar situations. Once the relevant experiences have been recovered these descriptors are used to modify, or instantiate, the experience to fit the current situation exactly. The Constrained Example Generation Interface Subsystem directs Constrained Example Generation (CEG) system [63 to search the knowledge base. - It also includes the judgement routines needed by the CEG system to evaluate the closeness of each potentially relevant experience to the current situation. The User Presentation Subsystem consists of a -- series of graphic display routines that draw for the user each course of action from the instantiated experiences. V. SCENARIOS - AN EXAMPLE -~ Figure 2 shows a sample situation from the conflict simulation game Chickamauaa. The goal of the Confederate forces (light color) is to -remove from play the Union forces (dark color). Figures 3 through 5 show the course of action of one of the relevant experiences. Figure 6 shows the same course of action after instantiation. Figures 7, 8, and 9 show some of the other courses of action that the system has selected as being of interest. The complete scenario consists of all these projections plus some 47 others that show other tactics, assumptions about uncontrolled processes and outcomes. Fig. 2: Noted are the roads, trails and terrain present. Fig. 3: The first move of the relevant experience shows the Confederates moving and destroying the first Union force. Fig. 4: The second move shows the Union's aggressive response: to attack the rear Confederate force. Fig. 5: The third move shows the Confederates turning, encircling and destroying the last Union force. VI. Summary and Conclusions -- Scenarios are useful too1 for examining uncertain futures in many domains. In particular, our research has shown good results in two widely differing domains, errand running and conflict simulation games. The same generation mechanism was used for both these domains, leading to the conclusion that with the inclusion of the proper domain specific knowledge, many other problem areas could be handled similarly. 179 Fig. 6: The same course of (all in one diagram) modified the actual current situation. action to fit Fig. 7: An alternative outcome: the first Union force is not destroyed and together the Union forces damage the Confederates. Fig. 8: An alternative outcome: the first Union force is destroyed but the second, unattacked, escapes. Fig. 9: An alternative course of action shows the Confederates dividing their forces, unable to destroy either Union force which then escape. REFERENCES C51 Rissland, E. "Examole Generation" In Proc. [l] Chien, R. T. & Weissman, S. "Planning & Execution in Incomletely Specified Environments" In Proc. IJCAI-75 Tbilisi, Georgia, USSR, September, 1975, pp169-174. 121 Carbonell, J. G. "Planning thru Adversity: The Counterplanning Process" In Proc. IJCAI-79 Tokyo, Japan, August, 1979, PP* 124-130. C33 Hayes-Roth, B., et. al. "Modeling Planning as an Incremental Opportunistic Process" In Proc IJCAI-79 Tokyo, Japan, August, 1979, PP 375-383. C41 Hayes-Roth, B. 
"Projecting the Future for Situation Assessment and Planning" Note N-1600-AF Rand Corp., Santa Monica, California, Nov 1980. Third National Conference of the Canadian Society for Computational- Studies of IntelligenK(1980). - [61 Rissland, E., Soloway, E., Waisbrot, S., Wall, R "Constrained Example Generation - an Imilementation" University of Massachusetts Tech Report in preparation. C71 Sacerdoti, E. D. tlProblem Solving Tactics" In Proc. IJCAI-79 Tokyo, Japan, August, 1979, PP 1077-1085. 181 Schank, R. "Language and Memory" Cognitive Science 4, pp 243-284 (1980). C91 Siklossy, L. & Dreussi, J. "Simulation of Executing Robots in Uncertain Environments" Presented at NCC 1974. 180
ERROR DETECTION AND RECOVERY IN A DYNAMIC PLANNING ENVIRONMENT

Blake Ward and Gordon McCalla
Dept. of Computational Science
University of Saskatchewan
Saskatoon, Saskatchewan, S7N 0W0

ABSTRACT

In this paper a set of techniques for error detection and recovery is proposed. These techniques augment a planning system (the ELMER system) which already has many features for preventing execution errors but has few features for handling errors that can't be prevented. The error handling techniques presented in this paper depend for their effectiveness on the close co-operation of the planning, execution and knowledge base components of the system, and especially on the use of knowledge learned from the earlier execution of other plans.

1. INTRODUCTION

For the past several years we have been developing a geographic planning system which takes the view that planning, execution, and knowledge acquisition are inseparable components of a planner operating in a dynamic environment. The ELMER system (McCalla et al [1978], [1979], [1982]) uses a unique route-splicing planning methodology to produce plans to take a taxi driver (ELMER) to a destination in a small (simulated) city (Figure 1) which contains dynamic obstacles such as cars, pedestrians, etc. These obstacles are anticipated well enough in the current system that error recovery is seldom needed. However, for some dynamic situations, errors will happen, and this paper explores how the ELMER system can be extended to handle error recovery.

Other research besides ours has emphasized the importance of handling execution errors. HACKER (Sussman [1973]) for example produces plans and then debugs them. Srinivas [1977] categorizes errors which occur in the execution of robot plans and suggests several approaches to correcting them. The incremental planning system of Hayes-Roth and Hayes-Roth [1979] suggests that people don't produce perfectly structured plans and then execute them, but take a much more integrated approach where plans are produced and then modified and corrected through simulated execution. Other systems discuss the importance of execution monitoring and error correction, but don't necessarily suggest they be done automatically. One such system is the interactive planning system of Robinson and Wilkins [1980] where monitoring and execution are done by the user and not the planner. Thus the addition of an error recovery capability to the ELMER system is important, and the techniques outlined here may have wider applicability than just ELMER.

2. THE ELMER SYSTEM

The ELMER system has three major components: the planner, the Map, and the Executor, as shown in Figure 2. The Executor receives windows of low-level sensory information from the geographic microworld indicating the presence of both permanent features (stop signs, street signs) and transient features (cars, pedestrians) at varying distances from ELMER. It attempts to correlate this window information with the hierarchical plan it is trying to execute. For example, the core plan in Figure 3 (ie. the numbered plans headed by plan 23) represents a plan to go from the intersection Reiter @ Winograd to Schubert @ Brachman (streets in a mythical city in which ELMER "exists"). Plan 23 breaks down into sub-plans 24 and 25 (representing traversals of smaller portions of the path) and these in turn break down to sub-sub-plans 120, 121 and 122, and 123, 124 and 125, respectively.
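A hedged sketch of such a plan hierarchy, with exit transitions attached, might look as follows; the structure and helper are reconstructions for illustration rather than ELMER's actual representation (Figure 3 itself is not reproduced here).

# A rough sketch of the core-plan hierarchy just described.

plan = {
    "id": 23, "children": [
        {"id": 24, "exit": "Past Reiter @ Winograd",
         "children": [{"id": 120}, {"id": 121}, {"id": 122}]},
        {"id": 25, "exit": "At Schubert @ Brachman",
         "children": [{"id": 123}, {"id": 124}, {"id": 125}]},
    ],
}

def active_branch(node):
    """Initially the left-most branch of the hierarchy is active."""
    branch = []
    while node:
        branch.append(node["id"])
        node = node.get("children", [None])[0]
    return branch

print(active_branch(plan))   # -> [23, 24, 120]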
The Executor activates the left-most branch of the hierarchy (23, 24, and 120) and looks for window information which allows transitions (eg. Past Reiter @ Winograd) to be made from any of these active plans. For example, if At Reiter @ Schubert is recognized (say by the presence of an appropriate street sign), the new active branch of the hierarchy will become 23, 24, and 123. Certain predictable errors which could arise if stop signs or red lights or the like were missed can be prevented by secondary plans such as A,B,...,H which act in parallel to the core plan throughout its execution.

Once the plan has been successfully executed, the plan is added as a route to the Map. Routes are just plans instantiated by attaching the windows seen during execution beneath the primitive plans where they were "seen". Certain information (such as speed limits, directions, distances, etc.) can be abstracted into higher level plans as well, and information about how this new route connects (associates or inassociates) to other routes is also added. Intuitively, a route associates into another route if it is "easy" to get from the first route to the second. At the same time the second route is said to inassociate to the first. Such associations/inassociations form the basis of the Planner's ability to create plans.

When presented with the request to "go from A to B", the Planner first of all looks to see if the Map has a route already connecting A to B (ie. ELMER has made this trip before). Note that this route only needs to be part of a previous trip--extra portions before A or after B are ignored. If not, it looks to see if there is a route connecting some route that A associates into to some route inassociated to B (ie. if it is easy to get to some place close to A from which it is easy to get to some place close to B). If not, associations/inassociations at higher levels of detail are tried, and if this still fails, the Planner attempts to splice together two routes, the first of which contains A, the second of which contains B, and which mutually intersect at some point. Once an appropriate plan has been concocted it is passed down to the Executor.
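The Planner's escalating lookup strategy can be sketched as below; the Map interface (routes, assoc, inassoc) is invented for illustration, and the higher-level retries and final splicing step are omitted.

# A hedged sketch of the Planner's lookup strategy.

class Map:
    def __init__(self, routes, assoc, inassoc):
        self.routes = routes          # each route is a list of places
        self.assoc, self.inassoc = assoc, inassoc

    def route_containing(self, a, b):
        return next((r for r in self.routes
                     if a in r and b in r and r.index(a) <= r.index(b)), None)

def plan(m, a, b):
    # 1. A previous trip already covers A..B.
    direct = m.route_containing(a, b)
    if direct:
        return direct[direct.index(a):direct.index(b) + 1]
    # 2. A place easy to reach from A connects to a place close to B.
    for a2 in m.assoc.get(a, []):
        for b2 in m.inassoc.get(b, []):
            mid = m.route_containing(a2, b2)
            if mid:
                return [a] + mid[mid.index(a2):mid.index(b2) + 1] + [b]
    # 3. Otherwise, splice two intersecting routes (omitted here).
    return None

m = Map(routes=[["X", "Y", "Z"]], assoc={"A": ["X"]}, inassoc={"B": ["Z"]})
print(plan(m, "A", "B"))   # -> ['A', 'X', 'Y', 'Z', 'B']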
the Planner can explicitly add error transitions to plans, the ELMER analogue to saying “if you see the drug store on your right you’ve gone too far”; or ii. the Executor can monitor the execution of the plan watching for any of several conditions that indicate an error may have occurred. Explicitly added error transitions come from two sources. The primary source comes as a residue of the Planner’s route splicing methodology. Basically, if we wanted a plan to go from Wimp-ad @ &hank to Reiter 8 Schubert and the Map currentiy contained the routes shown in Figures 3 and 4 then the resulting plan would be that shown in Figure 5. Notice that in splicing the two routes together, the Planner has added two new transitions that indicate error condi- tions. The Plahner does have to use some care in adding error transitions since it is possible that two routes that are being spliced overlap and share a common sub-route. Since the overlapping portion is being used as a part of the final plan, it cannot be added as an error transition as well. Another source of explicit error transitions comes as a result of errors made in the exec‘utioh of a previous plan. When the old plan is being added to the )Map, it is relatively straightforward to abstract a record of the erroneous path previously traversed and tack it on to the old plan as a ti-ansitioh from the sub-plan where the error originated. The planner can then pass any such relevant error transi- tions down to the Executor so that the error can be instantly recognized in the future. If the error can’t be recognized via explicit error transi- tions, it must be detected by monitoring plan execution. One approach is to compare the windows being seen during execution to those seen on previous traversals of the same route. To accomplish this, the Executor must have access to windows (or information abstracted from windows) that the Map has kept after previous traversals of the route. As a plan is executed, each window can be matched with the appropriate small set of previously seen windows beneath the current primitive plan (or can be searched for features corresponding to more abstract information). Domain Spec%c her&&s need to be employed iii ‘or’der tb ‘hi&‘& only the relevant features. In the ELMER world, permanent features such as street signs and and buildings are impor- tant while more transient features such as other cars and pedestrians ;are\ not. If crucial features don’t match then an error probably has occurred. An alternate approach to error detection involves adding a distance attribute to each go-along box in the Map that records the total length of the stretch of road traversed by that pian box. when the distance travelled by ELMER since the transition into that box exceeds the recorded distance an error has definitely occurred. 3.2 &or kecovery Once the error has been detected, recovery must take place. This largely boils down to trying to figure out where ELMER is now. If all else fails, ELMER can ask the dispatcher, but there are several heuristic approaches to solving this problem. One approach (somewhat like the plan patching of Srinivas [1977]) suggests retracing the steps taken when the error occurred in order to get back to the original plan. In the case of explicit error transitions, this may be fairly straighforward since the erroneous path is already part of the plan and need only be reversed in order to get back on track. Of course, one-way streets may foil this attempt, so there are no guarantees even in this simple case. 
Retracing may also be possible by abstracting primitive level plans from windows seen as the erroneous path was undertaken and then reversing the order of these plans. Not every window will contribute to a low-level plan nor is it always the case that a needed piece of information will be in a window, but the approach will often work. Once the path has been retraced, getting to the original destination is usually straightforward since the original plan can now be re-activated, unless, of course, road closures or the like continue to make the original plan inappropriate.

Under such circumstances, or in situations where it is impossible to retrace the wrong steps taken or when such retracing is impracticable (eg. owing to the length of time required to achieve it), other methods must be sought. One approach is to find a nearby location which is in the Map, try to get there, and then ask the Planner to re-plan a path to the destination. If the current location corresponds to an intersection known to the Map, then the problem is trivial -- just re-plan from there.

If not, ELMER is (in a sense) lost in that his position at the primitive levels is unknown to the Map. However, windows attached to the primitive level plans may be recognizable. Since ELMER probably hasn't gone far wrong, it is possible to look at routes associated/inassociated to the last primitive plan ELMER is known to have been in to see if windows attached to these nearby plans match what is seen at the current location. If not, then the hierarchical structure of a plan can be useful since it can be viewed as a plan traversing ever larger regions as you move up the hierarchy. Presumably ELMER is still within a region traversed at some level by the current plan (unless he has gone very wrong indeed), and this information can be used to avoid searching the Map's entire repertoire of windows. Using this intuition, it is possible to move up the hierarchy from the primitive level to more abstract levels. Associations/inassociations can be taken at the higher levels and all windows attached to primitive descendents of such associated/inassociated routes can be similarly matched to the current location for recognizable features. The process can continue until no further abstraction is possible or until a recognizable location is found. In the latter case, a plan to get from the primitive route containing the matching window to the destination can be readily constructed. In the former case, the current location is simply not in the Map (even by inference) so all that is left is to try to explore for some recognizable location.

Exploration needs a direction to explore in and a set of termination conditions to stop exploration. Both can be obtained from the last known location in the current plan. The direction is merely the direction of the destination relative to the last known location. Presumably (although not always) ELMER hasn't gone far enough off course to alter this relative direction. The termination conditions are obtained by taking all transitions from routes near the last known location (ie. the routes generated above, although the abstraction process can be stopped somewhat earlier if a smaller radius of exploration is desired). The exploration phase then proceeds with ELMER heading in the direction indicated (insofar as this is possible) until one of the transition labels matches, indicating that he is back in "known territory". Re-planning can then occur from that location.
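The hierarchy-climbing search for ELMER's location can be sketched as follows; the plan-node structure (with a parent link), the window matcher, and the association lookup are simplified stand-ins for the real Map machinery, not its actual interface:

    def locate(seen, last_primitive, neighbours, primitive_windows, match):
        """seen: windows at the current (unknown) location.
        neighbours(node): routes associated/inassociated to a plan node.
        primitive_windows(route): (primitive_plan, stored_windows) pairs.
        match(stored, seen): domain-specific feature comparison."""
        node = last_primitive
        while node is not None:                  # climb to ever larger regions
            for route in neighbours(node):
                for prim, stored in primitive_windows(route):
                    if match(stored, seen):
                        return prim              # recognizable: re-plan from here
            node = node.parent                   # abstract one level and widen
        return None                              # not in the Map: explore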
The ability to explore turns out to be useful in other situations as well, in particular planning. If the Planner is unable to splice together two routes to form a plan, then it can stick an exploration sub-plan in to bridge the gap, provided it knows the relative direction of the sub-routes being so bridged. This is non-trivial, unfortunately, unless the Map is extended to have some sort of global co-ordinate system. Such an extension is being designed, as are other Map extensions to allow the inference of various kinds of hypothetical routes based on categorizing various areas of the city as strip or crescent or the like (as is done in Kuipers [1977]). But a discussion of these aspects is beyond the scope of this paper.

In conclusion this paper illustrates the usefulness of an integrated view of the problem of recovering from execution errors. The uniform structure of routes and plans is important when trying to find ELMER's location or terminate an exploration. It is possible to associate/inassociate to nearby routes from the current plan, which when combined with the hierarchical structure allows a focussing on relevant Map routes. Hierarchical plan structure (absent, for example, from Srinivas' [1977] system) is thus useful in error handling.

The close co-operation of the Planner, Executor and Map is also useful. The Planner helps the Executor by providing explicit error transitions to help the Executor determine when errors have occurred. The Executor helps the Planner by being able to retrace steps and explore without needing re-planning; it also helps out by being able to execute exploration sub-plans to bridge unplannable gaps. But the main interaction occurs between the Executor and the Map, where the Map's summaries, learned from previous experiences, prove invaluable. The Map provides window information to the Executor to help execution monitoring and to help determine ELMER's location once he gets lost. It also provides summaries of previous execution errors in order that the Executor can explicitly avoid these in the future.

Apart from the occasional resort to domain specific heuristics (eg. in judging the relevance of window information) most of the error recovery techniques are not restricted to the geographic microworld. We are currently exploring other applications to test the generality of these techniques.

5. ACKNOWLEDGMENTS

We would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada and the University of Saskatchewan.

6. REFERENCES

[1] Hayes-Roth, B. and Hayes-Roth, F. (1979). A Cognitive Model of Planning. Cognitive Science 3, October-December, pp. 275-310.
[2] Kuipers, B.J. (1977). Representing Knowledge of Large Scale Space. AI Lab. AI-TR-418, MIT, Cambridge, Mass.
[3] McCalla, G.I., Schneider, P.F., Cohen, R. & Levesque, H. (1978). Investigations into Planning and Executing in an Independent and Continuously Changing Microworld. AI Memo 78-2, Department of Computer Science, University of Toronto, Ontario.
[4] McCalla, G.I. & Schneider, P.F. (1979). The Execution of Plans in an Independent Dynamic Microworld. Proceedings: Sixth International Joint Conference on Artificial Intelligence, Tokyo, Japan.
[5] McCalla, G.I., Reid, L. & Schneider, P.F. (1982). Plan Creation, Plan Execution and Knowledge Acquisition in a Dynamic Microworld. International Journal of Man-Machine Studies 16, pp. 89-112.
[6] Robinson, A.E. & Wilkins, D.E. (1980). Representing Knowledge in an Interactive Planner.
Proceedings: First Annual National Conference on Artificial Intelligence, Stanford, California.
[7] Srinivas, S. (1977). Error Recovery in Robot Systems, CIT, Pasadena, California.
[8] Sussman, G. J. (1973). A Computational Model of Skill Acquisition. AI Lab. AI-TR-297, MIT, Cambridge, Mass.

FIG. 1. Basic system architecture.
FIG. 2. Simon City.
Figure 3 - A Route
Figure 4 - Another Route
Figure 5 - Explicit Error Transitions
Experiential Learning in Analogical Problem Solving

Jaime G. Carbonell
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

A computational model of skill acquisition is analyzed based on extensions to an analogical problem solving method and previous AI work on concept acquisition. The present investigation focuses on exploiting and extending the analogical reasoning model to generate useful exemplary solutions to related problems from which more general plans can be induced and refined. Starting with a general analogical inference engine, problem solving experience is, in essence, compiled incrementally into effective procedures that solve various classes of problems in a more reliable and direct manner.1

1. Introduction

Whereas humans exhibit a universal ability to learn from experience no matter what the task [14], AI systems are seldom designed to model this adaptive quality. Concept acquisition, i.e. inducing structural descriptions of non-procedural objects from examples, has received substantial attention in the AI literature [9, 7, 11, 17, 18], but with a few exceptions, the techniques developed therein have not been transferred to learning in problem-solving scenarios.2 Since the process of acquiring and refining problem solving and planning skills is indisputably a central component in human cognition, its investigation from an AI perspective is clearly justified. In this paper I set out to investigate two hypotheses:

Hypothesis: Problem solving and learning are inalienable aspects of a unified cognitive mechanism. In other words, one cannot acquire the requisite cognitive skills without solving problems --- and, the very process of solving problems provides the information necessary to acquire and tune problem solving skills.

The second hypothesis postulates a unified learning mechanism.

Hypothesis: The same learning mechanisms that account for concept formation in declarative domains operate in acquiring problem-solving skills and formulating generalized plans.

1 This research was sponsored in part by the Office of Naval Research (ONR) under grant number N0014-79-C-0661, and in part by the Defense Advanced Research Projects Agency (DOD), ARPA order number 3597, monitored by the Air Force Avionics Laboratory under contract number F-33615-81-K-1539. The views and conclusions in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

2 The exceptions include Anzai and Simon's Learning-by-Doing Paradigm [1], Mitchell's LEX system [12], STRIPS with MACROPS [8], and Lenat's AM [10].

One method of testing the second hypothesis is to develop a problem solving mechanism into which one can integrate the techniques developed in concept formation --- with a resultant system that learns from problem solving experience. The analogical problem solving method discussed below provides a framework for automated example generation that enables one to apply learning-from-examples techniques in order to acquire generalized plans. First, I review the basic analogical problem-solving process, and subsequently I discuss the natural incorporation of the experiential learning component.

2. Solving Problems by Analogy

Most human problem solving occurs in a wealth of past experience, whether it be in mundane day-to-day situations, or in the pursuit of skilled tasks, such as structural engineering or medical diagnosis.
Hence, past problem solving experience may play a crucial role in both commonplace and expert behavior. The question then becomes: How can a problem solver benefit from experience? That is, how can one solve problems faster, more directly, and with more self assurance simply by having solved similar problems in the past? Or, in more operational terminology: How is knowledge of past problem solving behavior recalled and transferred to new problems in similar situations?

I have addressed the issue of recalling past problem-solving episodes that bear strong similarity to new problems by postulating adaptable, weighted similarity criteria [5, 3] exploiting a memory organization scheme along the lines of Schank's MOPS [16]. Upon recalling similar problem-solving experiences, aspects of the recalled information must be transferred to facilitate and guide the problem solving process. One method of bringing past problem-solving knowledge to bear in new scenarios is based on the analogical problem solving engine [2, 4], summarized below. The operational definition of analogy I adopt involves a reconstructive mapping, as evidenced in the following discussion, rather than Winston's more structural approach [19].

Consider a problem space similar to that of GPS [13] or STRIPS [8], in which problem solving occurs by standard Means Ends Analysis,3 with the added feature that the solution to any given problem must obey a set of path constraints, i.e. global predicates on the sequence of operators that comprise a solution. Now, instead of solving problems in the original problem space in which the states are descriptions of the external world, consider solving the problem by starting with a solution to a similar problem and transforming it into a solution for the new problem. That is, the analogical transformation space consists of states that are themselves complete solutions, and operators that consist of incremental transformations among the set of potential solutions. More explicitly, the analogy transformation space (T-space) is defined by:

3 The reader not familiar with Means Ends Analysis is encouraged to review the technique in any standard AI text, such as Winston's Artificial Intelligence [20], or in the much more thorough treatment in [13].

• The initial state is the recalled solution to a past problem that bears strong similarity to the current problem.

• The goal state is a specification of the solution to the new problem in terms of its original-space initial state, goal state, and path constraints.

• The transform operators perform all manner of useful edits on a given solution sequence --- such as subgoal-preserving substitutions, splicing in additional steps, deleting redundant steps, performing global parameterization, etc.

• The new T-space difference function is given by

    DT = <DO(SI1, SI2), DP(PC1, PC2), DO(SF1, SF2), DA(SOL1, SOL2)>

DO is the difference function between states in the original space. DP computes differences between path constraints (PC's). DA measures the applicability of the old solution in the new scenario by determining the fraction of operators in the initial solution sequence (SOL1) whose preconditions are not satisfied under the new problem specification. SI denotes an initial state, and SF denotes a final state.
The subscript 1 indexes the retrieved solution, and 2 indexes the specifications on the desired solution to the new problem. DT is reduced when any of its four components is independently reduced. The problem-solving process in T-space succeeds when DT = <NIL, NIL, NIL, NIL>.

• A difference table indexes T-operators as a function of the remaining T-space differences they reduce between the current solution sequence and the specifications defining the goal state sequence.

Graphically, the MEA Transform Space is depicted in the figure below.

Figure 2-1: A solution path in the original problem space becomes a state in the analogy transform space.

A more detailed discussion of this analogical problem solving method is presented in [2, 4]. Reiterating the basic idea: analogical problem solving proceeds by recalling the solution of a similar problem that worked well, and then transforming that solution to fit the requirements of the new problem at hand.

3. Learning Generalized Plans

The analogical transformation process provides a method of exploiting prior experience in a flexible manner. That is, it requires only that the new problem be structurally similar rather than identical to one or more previously solved problems.4 Hence, simply storing solutions to new problems constitutes a form of learning --- as these can serve as a basis from which solutions to yet newer problems may be analogized. However, there are other aspects to learning that present more interesting challenges. To wit, if a type of problem recurs with sufficient frequency, a human planner is apt to formulate a generalized plan for dealing with future instances of that problem, rather than reasoning analogically from a particular member of that cluster of similar experiences. A generalized plan is, in essence, similar to Schank's notion of a script [15, 6], i.e., a parameterized branching sequence of events with expected goals and default actions.

4 The MACROPS facility in STRIPS required corresponding initial states and goal states to be identical modulo parameterization of operators in order to reuse portions of past solution sequences [8].

How is a generalized plan acquired from past problem solving experience? Consider an inductive engine, such as those developed to formulate generalized concepts from sequences of positive and negative exemplars of the target concept [9, 17, 18, 7, 11]. Instead of acquiring disembodied concepts from an external teacher providing training sequences of exemplars labeled "positive" or "negative", in experiential learning the exemplars consist of analogically related past problems and their respective solutions. The concept learned is a generalized plan for solving problems of that type, where the "type" is not artificially defined by an external teacher, but internally defined by clusters of solutions derived analogically from a common ancestor. More specifically:

• Whenever the analogical problem solver generates a solution to a new problem, that solution is tested in the external world. If it works, it becomes a member of the positive exemplar set, together with the prior solution from which it was analogized and other solutions to problems from the same analogical root.

• If the analogized solution fails to work, the cause of the failure is stored and this solution becomes a member of the corresponding negative exemplar set.
• The positive and negative exemplar sets are given to an induction engine that generates a plan generalization encompassing all the positive solutions and none of the negative exemplars. Thus, past experience solving similar problems provides the training sequence, rather than an external teacher. And, the concept acquired is a generalized solution procedure rather than the description of a static object, as is typically the case in the concept acquisition literature.

• Moreover, negative exemplars are near-misses,5 since the analogical process generated them by making a small number of changes to known positive instances (i.e., transformations to past solutions of the same general problem type, retaining the bulk of the solution structure invariant). Hence, near-miss analysis can point out the one or two discriminant features between positive and negative exemplars of the general planning structure under construction. In other words, the problem solver serves as an automated example generator, that produces near-misses as a side effect when failing to generate an effective plan.

5 Winston [18] defines a near-miss as a negative exemplar that differs from positive exemplars in one or two significant features. Near misses are crucial in isolating defining characteristics of a concept in the learning-from-examples paradigm.

• The same generalization process used on the solutions can be applied to the problem descriptions corresponding to each solution. Thus, conditions of applicability are generated for each generalized plan.

• Finally, in cases where the analogical problem solver fails to generate a solution for the new problem (as opposed to generating an erroneous solution that becomes a negative exemplar for the generalized plan formation process), different information can be acquired. The situations where a solution was recalled and a plan was formed analogically (independent of whether the plan worked) serve as positive exemplars to reinforce and perhaps generalize the similarity metric used to search memory. The cases where a recalled solution could not be analogized into a candidate plan for the new problem suggest that the old and new problems differed in some crucial aspect not adequately taken into account in the similarity metric, and thus serve as negative exemplars to refine and constrain the similarity criterion.
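The bookkeeping these four points imply can be sketched in a few lines of Python. Here recall, analogize, execute, and induce are placeholders for the similarity-based memory, the T-space problem solver, the external world, and the induction engine respectively; the episode structure and cluster representation are our own assumptions, not Carbonell's code.

    def experiential_step(problem, recall, analogize, execute, induce, clusters):
        """clusters: dict mapping an analogical root (common ancestor)
        to a (positives, negatives) pair of exemplar lists."""
        episode = recall(problem)                 # similar past episode, or None
        if episode is None:
            return None
        solution = analogize(episode, problem)    # solve in T-space
        if solution is None:
            return None   # negative evidence for the similarity metric itself
        pos, neg = clusters.setdefault(episode.root, ([], []))
        if execute(solution):                     # test in the external world
            pos.append((problem, solution))
        else:
            neg.append((problem, solution))       # a near-miss, by construction
        # Induce a generalized plan covering all positives and no negatives;
        # the same process over the problem descriptions yields the plan's
        # conditions of applicability.
        return induce(pos, neg)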
Thus, we see that analogical problem solving interfaces naturally with a learning-from-examples method in that it provides an internal example generator requiring no external teacher. Presently, I am extending the problem solving engine to extract and use information from the planning process itself (not just problem descriptions and corresponding solutions), such as viable alternatives not chosen, causes of failure to be wary of in similar situations, etc., with a view towards acquiring, or at least refining, the problem solving strategies themselves, in addition to the generalized plans. Parts of the plan generalization process are currently being implemented to test the viability of the proposed knowledge acquisition method; preliminary results are encouraging. Although much of the theoretical and experimental work in acquiring problem solving skills is still ahead of us, there is sufficient evidence to support the two original hypotheses: the integration of learning and problem solving methods, and the utility of the learning-from-examples technique for acquiring planning skills as well as more static concepts.

References

1. Anzai, Y. and Simon, H. A., "The Theory of Learning by Doing," Psychological Review, Vol. 86, 1979, pp. 124-140.
2. Carbonell, J. G., "A Computational Model of Problem Solving by Analogy," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, August 1981, pp. 147-152.
3. Carbonell, J. G., "Metaphor: An Inescapable Phenomenon in Natural Language Comprehension," in Knowledge Representation for Language Processing Systems, W. Lehnert and M. Ringle, eds., New Jersey: Erlbaum, 1982.
4. Carbonell, J. G., "Learning by Analogy: Formulating and Generalizing Plans from Past Experience," in Machine Learning, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Palo Alto, CA: Tioga Pub. Co., 1982.
5. Carbonell, J. G., "Invariance Hierarchies in Metaphor Interpretation," Proceedings of the Third Meeting of the Cognitive Science Society, August 1981, pp. 292-295.
6. Cullingford, R., Script Application: Computer Understanding of Newspaper Stories, PhD dissertation, Yale University, Sept. 1977.
7. Dietterich, T. and Michalski, R., "Inductive Learning of Structural Descriptions," Artificial Intelligence, Vol. 16, 1981.
8. Fikes, R. E. and Nilsson, N. J., "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, Vol. 2, 1971, pp. 189-208.
9. Hayes-Roth, F. and McDermott, J., "Knowledge Acquisition from Structural Descriptions," Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp. 356-362.
10. Lenat, D., AM: Discovery in Mathematics as Heuristic Search, PhD dissertation, Stanford University, 1977.
11. Mitchell, T. M., Version Spaces: An Approach to Concept Learning, PhD dissertation, Stanford University, December 1978.
12. Mitchell, T. M., Utgoff, P. E. and Banerji, R. B., "Learning Problem-Solving Heuristics by Experimentation," in Machine Learning, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Palo Alto, CA: Tioga Pub. Co., 1982.
13. Newell, A. and Simon, H. A., Human Problem Solving, New Jersey: Prentice-Hall, 1972.
14. Newell, A. and Rosenbloom, P., "Mechanisms of Skill Acquisition and the Law of Practice," in Cognitive Skills and Their Acquisition, J. R. Anderson, ed., Hillsdale, NJ: Erlbaum Assoc., 1981.
15. Schank, R. C. and Abelson, R. P., Scripts, Plans, Goals and Understanding, Hillsdale, NJ: Lawrence Erlbaum, 1977.
16. Schank, R. C., "Language and Memory," Cognitive Science, Vol. 4, No. 3, 1980, pp. 243-264.
17. Vere, S., "Inductive Learning of Relational Productions," in Pattern-Directed Inference Systems, Waterman and Hayes-Roth, eds., New York: Academic Press, 1978.
18. Winston, P., Learning Structural Descriptions from Examples, PhD dissertation, MIT, September 1970.
19. Winston, P. H., "Learning and Reasoning by Analogy," CACM, Vol. 23, No. 12, 1979, pp. 689-703.
20. Winston, P., Artificial Intelligence, Reading, MA: Addison Wesley, 1977.
A PROGRAM THAT LEARNS TO SOLVE RUBIK'S CUBE

Richard E. Korf
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pa. 15213

Abstract

This paper describes a program which learns efficient strategies for solving problems such as Rubik's cube and the eight puzzle. It uses a new general problem solving method based on macro-operators. The strategies learned by the program are equal to or superior to strategies used by humans on these problems, in terms of number of moves required for solution.

1. Introduction: A limitation of GPS

This paper describes research aimed at extending the range of problems that can be solved by general problem solving methods. Currently, the most powerful such method is the combination of means-ends analysis and operator subgoaling described by Newell and Simon in [6], referred to here as the GPS (general problem solver) paradigm. GPS utilizes a set of differences or subgoals and a total ordering among them to arrive at a solution by achieving the subgoals one at a time. GPS can effectively solve a wide range of problems, many with a minimum number of moves. However, there exist problems that cannot be solved by the GPS formalism. The reason is that GPS requires a set of subgoals that can be solved sequentially such that once a subgoal is achieved, it never has to be violated in order to complete the solution of the problem. For some problems, such as Rubik's cube, no such set of subgoals is known. Every known strategy for the cube involves at least temporarily violating previously established subgoals in order to achieve new subgoals. Note that if we select a set of subgoals of the form "decrease the distance to the goal by one," then these subgoals can be solved sequentially. However, the only known way of computing the distance to the goal for an arbitrary state is exhaustive search. Hence, GPS is of little help in solving Rubik's cube.

The class of problems that are outside the domain of GPS is large and of considerable practical importance. For example, one subclass is the collection of NP-hard problems. For these problems, there are no known sets of subgoals that can be solved strictly sequentially. Furthermore, no efficient strategies are known for solving these problems. However, for some problems beyond the reach of GPS, such as Rubik's cube, efficient solution strategies are known. Another example of such a problem is the well known eight-puzzle. It is interesting to note that while efficient strategies are known for these problems, there are no efficient strategies for finding minimal-move solutions. This paper is concerned with this class of problems. The two questions to be addressed are:

1. What is the structure of these efficient strategies?
2. How can these strategies be learned or acquired?

This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, and monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539. The views and conclusions in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

2. MPS: Macro Problem Solver

This section describes a problem solving program, called the Macro Problem Solver, that can solve problems such as Rubik's cube and the eight-puzzle without doing any search. For simplicity, we will consider the eight-puzzle as an example.
The problem solver starts with a simple set of ordered subgoals. In the case of the eight-puzzle, each subgoal will be of the form "move the N tile to the correct position," for N between 1 and 8, plus the blank "tile". The operators to be used are not the primitive operators of the problem space but sequences of primitive operators called macro-operators or macros for short. Each macro has the property that it achieves one of the subgoals of the problem without disturbing any subgoals that have been previously achieved. Note that intermediate states occurring within a macro may violate prior subgoals, but by the end of the macro all such subgoals will have been restored, and the next subgoal achieved as well.

The macros are organized into a two dimensional table, called a Macro Table, which is analogous to the difference table of GPS. A macro table for the eight-puzzle is shown in Table 1, while Figure 1 shows the corresponding goal state for the puzzle. A primitive move is represented by the first letter of Right, Left, Up, or Down. Note that this is unambiguous since only one tile, other than the blank, can be moved in each direction. Each column contains the macros necessary to move one tile to the correct position without disturbing previously positioned tiles. The headings of the columns give the solution order or sequence in which the tiles are to be positioned. Note that the first subgoal is to position the blank. The algorithm for selecting the rest of the solution order will be described below. The rows of the table correspond to the current position of the next tile to be positioned.

    1 2 3
    8   4
    7 6 5

Figure 1: Goal state for the eight-puzzle

The algorithm employed by the macro problem solver works as follows: First the blank is located, its current position is used as a row index into the first column, and the macro at that location is applied. This moves the blank to the center position. Next, the number 1 tile is located, its position is used as a row index into the second column, and the corresponding macro is applied. This moves the 1 tile to its correct position and also leaves the blank in the center. The macros in the third column move the 2 tile into its correct position while leaving the blank and 1 tiles in their proper places, and similarly for the 3, 8, 7, and 4 tiles. At this point, tiles 5 and 6 must be in their correct positions or the puzzle cannot be solved. The lower triangular form of the table is due to the fact that as tiles are positioned, there are fewer positions that the remaining tiles can occupy.
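Restated as a sketch in Python, the solver's control loop is just repeated table lookup; the state encoding and the two helper routines are assumed rather than implemented here, so this is an illustration of the algorithm, not the program itself:

    def mps_solve(state, solution_order, macro_table, position_of, apply_macro):
        """solution_order e.g. ['B', 1, 2, 3, 8, 7, 4]; macro_table[(tile, pos)]
        holds the macro that brings `tile` home from position `pos` without
        disturbing tiles placed earlier in the order."""
        moves = []
        for tile in solution_order:
            pos = position_of(state, tile)
            macro = macro_table.get((tile, pos), "")  # "" when already home
            state = apply_macro(state, macro)
            moves.append(macro)
        return state, "".join(moves)  # tiles 5 and 6 must now be in place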
Inch column of the table contains the macros required to move a particular cubic to the correct position and orientation, from each possible position and orientation that the cubic could bc in, while leaving the cubies previously solved in their correct positions. The entire table contains 238 macros. The lengths of the macros range from one to sixteen primitive moves. The algorithm used to sclcct the solution order is the following: First pick a component which is affcctcd by the Icast number of primitive opcraLors. in other words, a corner rile or an cdgc cubic. Rcmovc those operators from the set of operators. At each step, pick the component which maximizes the number of primitive operators remaining which do not affect the previous goals. Tics arc rcsolvcd by choosing components adjacent to those already sclcctcd. For Rubik’s cube, this results in intcrmcdiaLc stages of solving an cdgc cubic, a 2x2x2 subcubc, a 2x2~3 rectangular box, two 3x3x1 planes. and finally the cntirc cube. Table 2 shows that the macro strategy for the tight-puzzle rcquircs about the same number of primitive moves on the a\.cragc as a human problem solver and that the Rubik’s cube macro strategy is more efficient than the stratcgics used by most people. STRATEGY EIGHT-PUZZLE RUBIK’S CUBE Optimal (brute force) 22 I?1 -la2 Macro Problem Solver 39 90 Average Human 38 PI 12tj3 Table 2: Average number of primitive moves in solution path generated by different strategies 8 L DRUL RULLDDRU RDLULDRRUL 7 LD RULDDRUL DRUULDRDLU RULDRDLULDRRUL RULD 4 R LDRURDLU LDRU RDLLURDRUL LURRDL URDLLURDRULD 5 RD LURDLDRURDLU LURDLDRU LDRULURDDLUR ULDRURDL URDLULDRRULD LURD 6 D URDLDRUL ULDDRU LDRUULDRDLUR URDL LDRRUULDRDLLUR ULDR Table 1: Macro table for the eight-puzzle 165 3. Learning Strategies This section addresses the issue of learning the strategies to be used by the macro problem solver. The problem is one of finding the macros to fill the macro table. Given an arbitrary scqucncc of primitive operators (a macro) and a solution order, WC define the invaricrnce of the macro as follows. The macro is applied to the goal state of the problem and then the number of components Mhich are in their goal position and orientation are counted, until a component is rcachcd in the solution order which is not in its goal state. For example, if the goal state of the eight-puzzle is rcprcscntcd by the vector [B 1 2 3 4 5 6 7 81, the solution order is (I3 1 2 3 8 7 4 5 6). and the state resulting from the application of some particular macro to the goal state is [I3 1 2 3 6 S 7 4 81, then the invarinncc of the macro is live, bccausc the first five tiles (including the blank) in the solution order arc in their goal positions and the sixth (the 7 tilt) is not. ‘I’hc invariance of a macro dctcrmincs its column in the macro table. The row of a macro is dctcrmincd by the position and orientation of the component that occupies the position immcdiatcly following the invariant components in the solution order. In the above example. the row of the m:lcro would bc the one labcllcd 4 bccausc the 4 tile occupies the sixth position in the solution order, or the 7 position in the puzzle. The simplest learning scheme is to perform a breadth-first starch of the problem space starting with the goal state, and for each macro gcncratcd. insert it into the macro table if the corresponding position is empty, until the table is filled. Note that a breadth-first starch ensures the shortest possible macro for each position in the table. 
This is the algorithm employed to generate the eight-puzzle macro table. It was also used to produce a macro table for a 2x2x2 version of Rubik's cube. However, the combinatorics of the full 3x3x3 Rubik's cube render this technique ineffective.

The technique used in this case is a type of bidirectional search. Consider two macros which map two corresponding cubies to the same position and orientation when applied to the goal state. The effect of the inverse of either macro, obtained by replacing each operator by its inverse operator and reversing the order of the operators, would be to map the cubie back to its original position and orientation. Hence, if the inverse of the second macro is appended to the first macro, the result is a macro which leaves invariant the particular cubie in question. If the states resulting from two macros match in (at most) the first N cubies of the solution order, the composition of one with the inverse of the other is a macro with invariance N. Thus, by storing the macros that are generated, and comparing each new macro to the stored ones, the macro table can be generated by searching to only half the depth of the longest macro required.

Unfortunately, in addition to requiring a great deal of space, a bidirectional search requires as much time as a unidirectional search if each new state must be compared to each stored state. This is avoided by hashing each macro using the cubies in an initial subsequence of the solution order. If macros are hashed according to the first N cubies in the solution order, then only macros with invariance greater than or equal to N will be found. However in general, as the invariance increases, the length of the corresponding macros also increases. Thus, in a breadth-first bidirectional search, the macros to fill the low invariance columns of the macro table will be found fairly early, and subsequent effort can be focused on macros with greater invariance, allowing a more effective hashing function. The algorithm maintains an invariance threshold, which is the minimum invariance for which the corresponding column in the macro table is not yet completely filled. As the invariance threshold increases, the entire table is rehashed using a hash function which takes advantage of the higher invariance. This algorithm is sufficient to find all macros up to length eleven for the Rubik's cube, before the available memory is exhausted.

This still leaves several slots empty in the macro table. These final macros are found by composing the macros with the greatest invariance. Note that the composition of two macros with invariance N necessarily results in another macro with invariance N or greater. There is some psychological plausibility to this technique in that many human cube solvers use compositions of shorter macros to accomplish the final stages in their solution strategies.

The learning program for the Rubik's cube is written in C and runs under Unix on a VAX-11/780. The time required to completely fill the macro table is about 15 minutes, and the memory required is about 200K words.
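The two bookkeeping operations at the heart of this learning scheme -- inverting a macro and measuring invariance -- are simple enough to sketch directly. The eight-puzzle move alphabet is used for concreteness, and the encoding of a state as a mapping from components to positions is our own assumption:

    INVERSE = {'R': 'L', 'L': 'R', 'U': 'D', 'D': 'U'}

    def invert(macro):
        # Replace each operator by its inverse and reverse their order.
        return ''.join(INVERSE[m] for m in reversed(macro))

    def invariance(state, goal, solution_order):
        # Count the leading components of the solution order that a macro
        # (whose result on the goal state is `state`) leaves in place.
        n = 0
        for comp in solution_order:
            if state[comp] != goal[comp]:
                break
            n += 1
        return n

    # If macros m1 and m2 agree on the first N components of the solution
    # order, then m1 + invert(m2) is a macro with invariance at least N.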
4. Related Work

The Strips [4] program was one of the first programs to learn and use macro-operators (MACROPS) in problem solving. However, the robot problem solving domain used by Strips and other programs is one that is amenable to a GPS treatment. Goldstein [3] wrote a program which automatically constructed triangular difference tables for GPS. The program worked on a variety of tasks, such as Towers of Hanoi and Instant Insanity, for which effective differences are known.

Furst et al [5] have demonstrated an algorithm for learning strategies for permutation puzzles. Both the Rubik's cube and the eight-puzzle can be embedded in larger problem spaces which are permutation groups. The structure of the macro table is from [5]. The macro composition technique described above is also from [5] and is the sole technique they use for learning macros. The running time of their algorithm is of order N^6, where N is the number of individual components of the puzzle. In the case of Rubik's cube, N is 48, one for each individual movable square face of the cube. Unfortunately, the Furst algorithm is impractical for problems as large as the cube (48^6 ≈ 12 billion) and would generate solutions that are inefficient in terms of number of primitive moves (about 250). However, the most significant difference between the approach exhibited here and that of [5] is that the running time of their algorithm is related to the size of the problem, whereas the running time of this method is related to the length of the longest macro needed to solve the problem.

Banerji [1] has made the observation that GPS fails to solve problems such as Rubik's cube and the fifteen puzzle, and suggests the technique of using macros to bridge the gaps between points of recognizable progress. His work was independent of and occurred at about the same time as this research.

5. Further Work

There are several areas that are currently being investigated further in this work. The first problem to be addressed is the identification of heuristics to reduce the amount of computation required to learn the macros. One approach is to characterize the amount of disorder in the puzzle state and to minimize this disorder in the search for macros. The second problem is reducing the number of moves required for a solution. One approach to this problem is to allow dynamic flexibility in the solution order to allow utilization of the shortest macros at each stage. Another approach is to satisfy more than one subgoal simultaneously. Thirdly, the number of macros in the complete strategy can be reduced by several means. One is to generate subgoals which are preconditions for the effective use of macros from a smaller set. Another is to parameterize macros to reduce the number of similar macros. The effect of different solution orders on these three problems also requires study. While the programs for the eight-puzzle and Rubik's cube were written separately, efforts are underway to generalize these programs to produce a single learning program that can handle this class of problems, and to characterize the range of problems for which the technique is useful. Finally, any realistic model of problem solving in these domains must admit a compound strategy composed of ordinary difference reduction plus the application of macros to jump between local maxima of the evaluation function.

6. Conclusions

There are three conclusions that can be drawn from this work. One is that GPS is not useful for solving problems such as Rubik's cube. The second is that a generalization of GPS, called the macro problem solver, is capable of solving these problems deterministically without search. Finally, the learning of the macros required by the macro problem solver can be effectively automated.
The resulting strategies are comparable to or superior to typical human strategies in terms of number of primitive moves required for solution.

Acknowledgments

I would like to acknowledge many helpful discussions concerning this research with Herbert Simon, Allen Newell, Merrick Furst, Ranan Banerji, and Glenn Iba. In addition, Dave McKeown, Kemal Oflazer, Bruce Lucas, and Greg Korf read and provided helpful comments on drafts of this paper.

References

1. Ranan B. Banerji. GPS and the psychology of the Rubik cubist: A study in reasoning about actions. In Artificial and Human Intelligence, A. Elithorn and R. Banerji, Eds., 1982.
2. K. Anders Ericsson. Approaches to Descriptions and Analysis of Problem Solving Processes: The 8-puzzle. Ph.D. Th., University of Stockholm, March 1976.
3. Ernst, George W., and Michael M. Goldstein. "Mechanical discovery of classes of problem-solving strategies." J.A.C.M. 29, 1 (January 1982), 1-23.
4. Fikes, Richard E., Peter E. Hart, and Nils J. Nilsson. "Learning and executing generalized robot plans." Artificial Intelligence 3 (1972), 251-288.
5. Furst, Merrick, John Hopcroft, and Eugene Luks. Polynomial-time algorithms for permutation groups. 21st Annual Symposium on Foundations of Computer Science, IEEE, Syracuse, New York, October 1980, pp. 36-41.
6. Newell, A. and H. A. Simon. Human Problem Solving. Prentice-Hall, Englewood Cliffs, N.J., 1972.
7. Schofield, P. Complete solution of the eight puzzle. In Machine Intelligence, N. L. Collins and D. Michie, Eds., American Elsevier, New York, 1967.
HEURETICS: THEORETICAL AND EXPERIMENTAL STUDY OF HEURISTIC RULES

Douglas B. Lenat
Heuristic Programming Project
Stanford University

Builders of expert rule-based systems [Barr 81] [Feigenbaum 77] [Hayes-Roth et al. 82] attribute the impressive performance of their programs to the corpus of knowledge they embody: a large network of facts to provide breadth of scope, and a large array of informal judgmental rules (heuristics) which guide the system toward plausible paths to follow and away from implausible ones. Yet what is the nature of heuristics? What is the source of their power? How do they interrelate; i.e., how can/should a large corpus of heuristic rules be organized? How do heuristics originate and evolve? "Heuretics" is the study of heuristics, with an eye toward answering questions such as those. Two case studies, the AM and EURISKO programs, have led to some tentative Heuretics hypotheses, a dozen of which are presented in this paper. Our aim is to stimulate future research in this field.

Hypothesis 1: Heuretics is a bona fide field of knowledge, and merits investigation by AI.

We speak of Heuretics as a field of knowledge because (as we sketched above) it has some more or less well agreed-upon objects of study, some motivation for studying such objects, and some central questions about the nature of such objects. But to rate as a field of knowledge, as a science, Heuretics must also possess some accepted methods for investigating its many questions. We hypothesize the adequacy of the standard empirical inquiry paradigm which dominates AI research; i.e., test hypotheses about heuristics by constructing -- and studying -- computer programs which use heuristics and which try to find new ones.

Hypothesis 2: Heuristic rules have three primary uses: to prune away implausible "moves" (actions, alternatives, etc.), to propose plausible ones, and to serve as data for the induction of new heuristic rules.

The first of these, using a heuristic to prune a search, is the one most heavily studied by earlier workers in the field (Michie, Nilsson, Gaschnig, etc. See, for example, [Gaschnig 77] and the references he cites.). The second use, plausible move generation, is the source of power exploited by AM [Lenat 79]. It has the character of learning by discovery. In the third and final case, the entities being learned are not new domain concepts, but new heuristics.

Hypothesis 3: Heuristics can act as "plausible move generators", to guide an explorer -- be he human or machine -- toward valuable new concepts worthy of attention.

This is one of the three roles for a heuristic rule, as noted in Hyp. 2. Here we illustrate how that works. Consider, as an example, the heuristic H1:

H1: if function f takes a pair of A's as arguments, then define and study g(x) =df f(x,x).

That is, given a binary function f, it's often worth taking the time and energy to see what happens when f's arguments coincide. If f is multiplication, this new function turns out to be squaring; if f is addition, g is doubling. If f is union or intersection, g is the identity function; if f is subtraction or exclusive-or, g is identically zero. Thus we see how two useful concepts (squaring, doubling) and four fundamental conjectures might be discovered by a researcher employing this simple heuristic.
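As a concrete (if anachronistic) illustration, H1 amounts to a one-line program transformation; the sample binary functions below are our own, not AM's, and the whole block is a sketch in Python rather than anything from the papers discussed here:

    def h1(f):
        # Given a binary f, define and study g(x) = f(x, x).
        return lambda x: f(x, x)

    square = h1(lambda a, b: a * b)   # multiplication -> squaring
    double = h1(lambda a, b: a + b)   # addition       -> doubling
    same   = h1(lambda a, b: a & b)   # intersection   -> identity (conjecture)
    zero   = h1(lambda a, b: a - b)   # subtraction    -> identically zero

    assert square(7) == 49 and double(7) == 14 and same(7) == 7 and zero(7) == 0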
and then processed by themselves ; Kill(x,x) [i.e., sutclde 1 ; Apply(x,x) Ponder(x,T) [i.e., self-awareness ; 1 afid even i.e., the activity we are now engaging in]. This hypothesis was su.ggested by work with AM, and has been confirmed In several domains by recent experiments with .EURISKO. When EURISKO was applied to the task of gcnefating interesting three-dimensional VLSI designs, it already possessed a heuristic that suggested augmentin.g any design, to make it more symmetric. When this was applied to the two-.climensional primitive device (the. gate: see Fig. l), it led to the very svmmetric device in gate oxide Figure 1. The standard MOS gate. Side view. The Channel tile .is intrinsic channel; i.e., it can serve between two positively (p-) doged regions or bctwecn two negatively (n-) doped regions. 159 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. Figure 2 below, a device which simultaneously computes NAND and OR, and which is the fLmdamenta1 building block of most of our latest 3-D VLSX chip designs. Symmetry is often more than merely aesthetic; here, it led to a device which tesselates (packs) three-space, and can be compactly stacked up into memory cells, PLAs, etc. James Gibbons, a pioneer in the techniques of builcling high-rise chi s by recrystallization of thin silicon films [Gibbons 80 , has f recently succeeded in fabricating these devices for us. ** NAND(A’B) I Metal tile tile P-doped Metal tile N-doped Figure 2 A symmetrized, three-dimensional extension of the gate from Figure 1. The central tilt is intrinsic channel, coated with gate oxide on both its top and bottom surfaces. HypothesEs 4: The same methodology enables a body of heuristic rules to monitor, modify, and enlarge itself. Each heuristic in EURISKO is represented as a full- fledged concept, asrf:‘;;ye with d!,zens yft;ts, each with a relatively succmt - This “parameterization” of he space of heuristics enables a corpus of heuristics to apply to itself. For instance, a heuristic that says “If a concept is using inordinate resources and achieving very little, then put it out of its misery (drop the ExpectedWorth to zero and stop using it)” can apply to mathematical functions and can also apply to heuristics. The heuristic “If a concept is sometimes useful but often not, then specialize it in many ways” actually applied to itself successfully, in one of the early runs of EUKlSKO, because it was sometimes useful but often not. One of EURISKO’s tasks is the design of naval ships, and entire fleets, conforming to a large set of rules and formulae (G DW’s “Trillion Credit Squadron” game). A simulator was easily built, and this enabled a kind of heuristically-guided evolution of fleets, with the simulator providing the needed natural selection function. As each battle was fought, specific designs were given credit and blame, and abstractions of these formed simple design heuristics (e.g., “Heavy armor is more important than agility”). Eventually, EURISKO generalized these into a very abstract heursitic: For the values of most parameters, when designing a TCS fleet, the best value will usually be a nearly -- but not quite -- extreme value. This was noticed at the level of designing an individual ship (nearly heaviest armor, nearly as many types of weapons as leoally allowed, nearly as small weapons as possible, ctc.7, and turned out to apply to the design of the fleet as a whole (nearly uniform, nearly minimal speed, etc.) 
The fleet designed by this process won the national Origins tournament in TCS last July (seven-round, single elimination). Partly as a result of the counter-intuitive loopholes exploited by that design, there were numerous rules changes in effect for the local tournament held in February, 1982. It is significant that EURISKO spent much less time developing a fleet (yes, it won again) for that set of rules, as most of its general heuristics were still valid, even though the particular designs it came up with were quite different (e.g., a ship with no offensive ability was a useful adjunct to its July, 1981 fleet; a ship with no defense was useful for the February, 1982 tournament; the old fleet had practically no ships with large weapons, whereas the new fleet had practically no ships without such weapons). EURISKO's design task was made much easier by the use of the new heuristics it synthesized last summer.

Hypothesis 5: Heuristics are compiled hindsight, and draw their power from the various kinds of regularity and continuity in the world.

If an action A was (or would have been) useful in situation S, then it is likely that actions similar to A will be useful in the future in situations similar to S. I.e., if we could somehow actually compute the utility of obeying a heuristic, then that function -- APPROPRIATENESS(Action, Situation) -- would be continuous in both variables. One useful exercise is to consider the graph of APPROPRIATENESS values for a fixed action, varying over the situations in which it might be applied. For example, consider graphing the utility of the heuristic "If it's April already and you haven't gotten your taxes done, then consider going to a commercial tax-preparer". The value of this advice varies as a function of many situation-dependent variables, such as your income (see Fig. 3). If you earned below $12k, it might be better to do it yourself; if you earned above $28k, it might be better to get an accountant.

The language of graphs of functions is now at our disposal, an attractive metaphor within which to discuss such processes as specializing a heuristic, using multiple heuristics, and measuring attributes of a heuristic's performance. For instance, we were led to ponder the significance of the slope of the curve as it intersects the x-axis. This reflects how crucial it is to determine true -- rather than approximate -- relevance of the heuristic. If the slope is steep, the effects of obeying the heuristic when it isn't quite relevant could be catastrophic. If the slope is mild, spending a great amount of time determining true relevance is a waste. We cannot automatically construct the graph of a heuristic's utility, but it is easy to infer the magnitude of its slope near the x-intercept from several cases of its accidental misuse. EURISKO presently uses this empirical technique to estimate this quantity for each heuristic. That value in turn guides the rule interpreter in deciding how much time to spend evaluating the IF-part of the rule before trying to apply the THEN-part.

Figure 3. The utility of "...go to a commercial income-tax-preparer" (*) and "...go to John Smith at the Palo Alto AxaTax office" (+).
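A minimal sketch of that empirical slope estimate, with an invented log format (pairs of how far a misfiring situation was from true relevance, and the utility actually obtained) -- nothing here reflects EURISKO's actual bookkeeping:

    def boundary_slope(misuse_log):
        # Average |utility| per unit distance over recorded near-boundary
        # misfirings of the heuristic.
        samples = [abs(u) / d for d, u in misuse_log if d > 0]
        return sum(samples) / len(samples) if samples else 0.0

    # A steep estimated slope tells the interpreter to spend more time
    # checking the IF-part before trusting the THEN-part; a mild one says
    # such checking is largely wasted effort.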
As an example of specializing a heuristic, consider what happens as we change the THEN-part of the above heuristic into "...then go see John Smith at the Palo Alto AxaTax office". His forte may be individuals whose income is around $20k, and he may be worse than his colleagues for taxpayers earning about $12k, or about $28k. See the graph marked "+" in Fig. 3. If we specialize the THEN-part of a heuristic, it typically has higher utility but only over a narrower domain. Notice the area under the curve appears to remain roughly constant; this is a geometric interpretation of the tradeoff between generality and power of heuristic rules.

Hypothesis 6: Generalizing and specializing a heuristic often leads to a pathologically extreme new one, but some of those are useful nevertheless.

By examining the graphs in Figure 3 above, one can generate a list of possible bugs that may occur when the actions (THEN-part) of a heuristic are specialized. First, the domain of relevance of the new one may be so narrow that it is merely a spike, a delta function. This is what happens when a general heuristic is replaced by a table of specific values. A more common bug occurs when one of the heuristics is completely dominated by the other. For example, "Smack a vu-graph projector if it makes noise" has much narrower domain, but no higher utility, than the more general heuristic "Smack a device if it's acting up". Thus, the area under the curve is greatly diminished, but no benefit (narrow, high peak) accrues.

While the last paragraph warned of some extreme bad cases of specializing the THEN-part of a heuristic, there are some extreme good cases which frequently occur. The utility (y-) axis may have some absolute desirable point along it (e.g., some guarantee of correctness or efficiency), and by specializing the heuristic enough, its utility may exceed that threshold (albeit over a narrow range of tasks). In such a case, the way we qualitatively value that heuristic may alter; e.g., we may term it "algorithmic" or "real-time" or "0-tax". So some of the most useful constructs in computer science can be viewed as pathological cases of heuristics. Algorithms are seen to be heuristics which are so powerful that guarantees can be made about their use. Tables of values are seen to be heuristics whose domain is a set of measure zero.

Hypothesis 7: The graph of "all the world's heuristics" is surprisingly shallow.

One can take a specific heuristic and generalize it gradually, in all possible ways, until all the generalizations collapse into weak methods. By carrying out this activity over and over again, for heuristics from various fields, we can imagine building up a graph that would approach -- in the limit -- "the graph of all heuristics." This might be a useful technique, then, for investigating the space of heuristics, getting at its structure. A preliminary analysis (using AM's 243 heuristics) led us to expect the tree to be of maximum depth about 50, though most of AM's heuristics were turned into weak methods after only about a dozen generalization steps. Next, with the help of Woody Bledsoe and Herbert Simon, we analyzed this partial tree of AM's heuristics, examining the power of the rules therein. It soon became apparent that most generalizations Hgen were just as powerful as the heuristic rule(s) Hspec beneath them! In such cases, the specific rule(s) can be eliminated from the tree. The resulting tree had depth of only 4, and is thus incredibly shallow and bushy. We observed that all but the top couple and bottom couple levels of the tree could be eliminated with no ill effects. This elevates the top levels (the weak methods) to a special status, and likewise the bottom levels (the domain-specific rules). It may shed some light on the successes of two radically opposed philosophies of heuristic search: cognitive science and knowledge engineering. It also dooms most attempts to create a new heuristic by specializing (moving downwards in that tree): the new heuristics synthesized that way are almost sure to be no more powerful than the ones you started with. This explains theoretically a finding we reached empirically using EURISKO: generalization and analogy -- not specialization -- are the most powerful ways to discover new heuristics.

One solution to this "shallow-tree problem" is the realization that there are numerous ways in which one heuristic can relate to another: abstraction-of, applies-more-widely-than, possibly-triggers, easier-to-teach-than, etc. If there is any such relation for which Hnew has some higher power than Hold, then it's worth keeping Hnew around. E.g., special cases of Maxwell's equations and Thorp's Blackjack tens-count systems are important, as they are easier to teach and use than the general versions.
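The tree-compression step just described can be phrased as a short tree walk; the node structure and its power attribute are illustrative assumptions, not AM's or EURISKO's data structures:

    def compress(node):
        # Eliminate any heuristic no more powerful than its generalization,
        # promoting its children for re-testing; what survives is the
        # shallow, bushy tree observed in the analysis above.
        frontier, kept = list(node.children), []
        while frontier:
            child = frontier.pop()
            if child.power > node.power:
                kept.append(compress(child))
            else:
                frontier.extend(child.children)  # splice upward and re-test
        node.children = kept
        return node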
This elevates the top levels (the weak methods) to a special status, and likewise the bottom levels (the domain-specific rules). It may shed some light on the successes of two radically opposed philosophies of heuristic search: cognitive science and knowledge engineering. It also dooms most attempts to create a new heuristic by specializing (moving downwards in that tree): the new heuristics synthesized that way are almost sure to be no more powerful than the ones you started with. This explains theoretically a finding we reached empirically using EURISKO: generalization and analogy -- not specialization -- are the most powerful ways to discover new heuristics. One solution to this "shallow-tree problem" is the realization that there are numerous ways in which one heuristic can relate to another: abstraction-of, applies-more-widely-than, possibly-triggers, easier-to-teach-than, etc. If there is any such relation for which Hnew has some higher power than Hold, then it's worth keeping Hnew around. E.g., special cases of Maxwell's equations and Thorp's Blackjack tens-count systems are important, as they are easier to teach and use than the general versions.

Hypothesis 8: Even though the world is often discontinuous, we usually cannot do any better than rely upon heuristics that presuppose continuity (see Hyp. 5).

There are many measures of APPROPRIATENESS (e.g., efficiency, low down-side risk, comprehensibility), and many dimensions along which Situations can vary (e.g., difficulty, time, importance, subject matter). Compounding this is the nonlinearity of the Situation space along most of these dimensions. Thus the "zeroth order theory" espoused in Hyps. 5-7 is only a metaphor. Yet it is too attractive, too close to what human experts actually do, to reject out of hand. It can be extended into a "first order theory": It is frequently useful to behave as though the zeroth order theory were true. That is, one acts as if the function APPROPRIATENESS(Action, Situation) existed, were computable, continuous, and time-invariant. To give an example: the current situation may appear similar to ones in which it was cost-effective to skip to the Conclusions section of the paper. Even though you can't be sure that that's an appropriate action to take now, it may be useful for you to behave as though the world is that continuous, to take that action anyway. If you do so, you're following a heuristic. [To not tempt the reader to follow that heuristic, this paper has no Conclusions section.] That heuristic's guidance is only as good as the generalization process you used in deciding the situation was similar (e.g., would you apply it to all articles? to all articles written by Lenat?) The world has of course changed in innumerable ways since the formation of your heuristics about paper-reading. You cannot monitor even a small fraction of the changes in the world; you cope (i.e., solve the Frame Problem) by relying on extant heuristics and revising them -- but usually only as they fail you.

Hypothesis 9: The interrelations among a set of heuristics -- and the internal structure of a single one -- can and should be quite complex.

Six years ago, the AM program was constructed as an experiment in learning by discovery. Its source of power was 243 heuristics, rules which guided it toward fruitful topics of investigation, toward profitable experiments to perform, toward plausible hypotheses and definitions.
Its ultimate limitation apparently was due to its inability to discover new, powerful, domain-specific heuristics for the various new fields it uncovered. At that time, it seemed straightforward to simply add "Heuristics" as one more field in which to let AM explore, observe, define, and develop. That task -- learning new heuristics by discovery -- turned out to be much more difficult than was realized initially, and we have just now achieved some successes at it. Along the way, it became clearer why AM had succeeded in the first place, and why it was so difficult to use the same paradigm to discover new heuristics. In essence, AM was an automatic programming system, whose primitive actions were modifications to pieces of Lisp code, predicates which represented the characteristic functions of various math concepts. See [Green et al., 74] for background on this style of code synthesis and modification. It was only because of the deep relationship between Lisp and Mathematics that these operations (loop unwinding, recursion elimination, composition, argument elimination, function substitution, etc.) which were basic Lisp mutators also turned out to yield a high "hit rate" of viable, useful new math concepts when applied to previously-known, useful math concepts. For instance, AM took a piece of Lisp code which determined whether or not two list structures were equal, lopped off one recursive call, and (see Fig. 4) wound up with a new Lisp predicate that returned True iff its arguments had the same length. But no such deep relationship existed between Lisp and Heuristics: when the basic automatic programming operators were applied to viable, useful heuristics, they almost always produced useless (often worse than useless) new heuristic rules.

PRED1: (LAMBDA (x y)
         (COND ((EQ x y) T)
               ((OR (NULL x) (NULL y)) NIL)
               (T (AND (PRED1 (CAR x) (CAR y))
                       (PRED1 (CDR x) (CDR y))))))

PRED2: (LAMBDA (x y)
         (COND ((EQ x y) T)
               ((OR (NULL x) (NULL y)) NIL)
               (T (PRED2 (CDR x) (CDR y)))))

Figure 4. AM was given PRED1, the characteristic function for the math concept "list-equality". By lopping off one recursive call, it created PRED2, which computes "lists having the same length".

Over the past six years, we have constructed a new language in which the statement of heuristics is more natural and compact. The vocabulary includes many types of conditions (If-there-are-enough-resources, If-we-recently-worked-on), actions (Then-conjecture, Then-add-to-agenda), and non-executable attributes that record descriptive information about the heuristic (Average-running-time, Origin). Instead of writing two large lumps of Lisp code to represent the heuristic (If and Then), one spreads the same information out across many dozens of "slots". A heuristic in EURISKO is now -- like a math concept always was in AM -- a collection of about twenty or more slots, each filled with a line or two worth of code. By employing this new language, the old property that AM satisfied fortuitously is once again satisfied: the primitive syntactic operators usually now produce meaningful semantic variants of what they operate on. The ties to the foundations of Heuretics have been engineered into the syntax and vocabulary of the new language, partly by design and partly by evolution, much as McCarthy engineered ties to the foundations of Mathematics into Lisp. The EURISKO program employs this language to represent hundreds of heuristics.
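To make the slot idea concrete, here is a hedged sketch in Python (EURISKO's actual language is Lisp-based RLL; the slot names below echo those mentioned in the text, while the slot bodies are invented placeholders):

heuristic_h17 = {
    # Condition slots, each a line or two worth of code:
    "If-potentially-relevant": lambda task: task.get("domain") == "fleet-design",
    "If-there-are-enough-resources": lambda task: task.get("cpu-minutes", 0) > 5,
    "If-we-recently-worked-on": lambda task: task.get("recently-active", False),
    # Action slots:
    "Then-conjecture": lambda task: task.setdefault("conjectures", []).append("c1"),
    "Then-add-to-agenda": lambda task: task.setdefault("agenda", []).append("t1"),
    # Non-executable descriptive slots:
    "Average-running-time": 0.3,
    "Origin": "specialized from a more general rule",
}

def is_applicable(heuristic, task):
    # A heuristic fires only when every executable If- slot is satisfied.
    return all(test(task)
               for name, test in heuristic.items()
               if name.startswith("If-") and callable(test))

Because each slot is small and typed by its name, a syntactic operator that replaces or perturbs a single slot tends to produce a semantically meaningful variant of the whole rule -- the property the text says was engineered into the new language.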
EURISKO explores eight task domains: design of naval fleets, elementary set theory and number theory, Lisp programming, biological evolution, games in general, the design of three-dimensional VLSI devices, the discovery of heuristics which help the system discover heuristics, and the discovery of appropriate new types of "slots" in each domain. From 200 to 500 concepts from each domain (objects, operators, etc.) were also supplied initially. In each domain, many new concepts and designs, and a few heuristics, were indeed discovered mechanically.

Hypothesis 10: Some very general heuristics can guide the search for new domain-dependent heuristics.

Even though two domains may appear disparate, the same heuristics may be equally powerful in coping with them. This one, e.g., is useful in almost all fields: IF you're about to use or prove something about C, THEN first make certain that C has some examples, i.e., ensure that C is not vacuous. Even if the heuristics for the two domains seem disparate, the paths which were followed in getting the powerful heuristics of the field may be similar.

Hypothesis 11: As with any field of human endeavor, Heuretics is accumulating a corpus of informal judgmental knowledge -- heuristics.

In this case, the heuristics are about Heuretics; they guide the heuretician in extracting heuristics from experts, in deciding when the existing corpus of heuristics needs to be augmented, in representing heuristics within knowledge bases, in evaluating the worth of a heuristic, in troubleshooting a program built around a large collection of heuristic rules, etc. Some examples are:

The best way to extract a new heuristic is to have the expert watch the existing program err, and ask him/her to help track down the "bug".

When generalizing a heuristic, don't replace the central "AND" connective of the If-Potentially-Relevant slot by "OR"; the result is indeed more general, but likely to be very over-generalized.

New heuristics should emerge much less often than new domain concepts.

If a heuristic takes up a lot of room, there's probably a useful new slot or two that can be defined, to shorten that (and other) heuristics.

Hypothesis 12: Domains which are unexplored, internally formalizable, combinatorially immense, and highly structured are ideal for studying Heuretics.

EURISKO found many new concepts and heuristics in the TCS game and the 3D VLSI design task because any discoveries in those domains were almost certain to be new: the domains were practically unexplored by human beings. Breaking new ground is often easier than scouring old ground for neglected gems. What other criteria make a task well-suited to automated discovery? A second one is that there must be a way to simulate or directly carry out experiments. Third, the "search space" should be too immense for other methods to work. No human should be able to manually search the same space the program is walking around in. Fourth, the task domain must be rich in structure, including heuristic structure. There should be many objects and operators, many kinds of objects and kinds of operators. Hopefully they will be related hierarchically and in other ways. Complexity of the domain raises the utility of plausible, inexact reasoning, hence the need for heuristic guidance. By "heuristic structure" we mean the presence of many heuristics, and implicitly the absence of any known efficient algorithms to replace them.
For instance, theorem proving in propositional calculus is a poor task domain for Heuretics research, as it admits only a few heuristics, and they are already well known. Finally, one of the most crucial requirements is that of an adequate representation. If the language or representation employed is not well matched to the domain objects and operators, the heuristics that do exist will be long and awkwardly stated, and the discovery of new ones in that representation may be nearly impossible. An example of this was the painful development of EURISKO, which began with a small vocabulary of slots for describing heuristics (If, Then), and was forced (in order to obtain reasonable performance) to evolve two orders of magnitude more kinds of slots that heuristics could have, some of them domain-dependent.

The purpose of this paper has not been to convincingly argue each point. Rather, we hope to stimulate research in a new area -- Heuretics -- and to that end have indicated a spectrum of questions, apparent regularities, and issues which are worth investigating further.

Acknowledgments

Productive discussions with John Seely Brown, Bruce Buchanan, Bill Clancey, Ed Feigenbaum, Johan de Kleer, George Polya, Herb Simon, and Mike Williams have heavily influenced this work. EURISKO is written in -- and relies upon -- RLL. The 3D VLSI work is in collaboration with Bert Sutherland of SSA and Jim Gibbons of Stanford. Finally, I wish to thank Xerox PARC's CIS and Stanford University's HPP for providing superb environments (intellectual, physical, and computational) in which to work. Financial support has been provided by ONR (N00014-80-C-0609) and Xerox PARC.

References

Barr, Avron, and Edward A. Feigenbaum, eds., Handbook of AI, Volume II, William Kaufman, Los Altos, 1981.

Feigenbaum, Edward A., "The Art of Artificial Intelligence", Proc. Fifth IJCAI, Cambridge, Mass., August, 1977, p. 1014.

Gaschnig, John, "Exactly How Good Are Heuristics?: Toward a Realistic Predictive Theory of Best-First Search", Proc. Fifth IJCAI, Cambridge, Mass., August, 1977.

Gibbons, James, and K. F. Lee, "One-Gate-Wide CMOS Inverter on Laser-Recrystallized Polysilicon", IEEE Electron Device Letters, EDL-1, 6, June, 1980.

Green, Cordell, Richard Waldinger, David Barstow, Robert Elschlager, Douglas Lenat, Brian McCune, David Shaw, and Louis Steinberg, Progress Report on Program Understanding Systems, STAN-CS-74-444, AI Lab, Stanford, August, 1974.

Hayes-Roth, Frederick, Donald Waterman, and Douglas Lenat (eds.), Building Expert Systems, proceedings of the 1980 San Diego workshop in expert systems, to appear 1982.

Lenat, Douglas B., "On Automated Scientific Theory Formation: A Case Study Using the AM Program," in (Hayes et al., eds.) Machine Intelligence 9, NY: Halstead Press, 1979, pp. 251-283.

Lenat, Douglas B., "The Nature of Heuristics", J. Artificial Intelligence, to appear Fall, 1982.

Polya, G., How to Solve It, Princeton University Press, 1945.
Job-Shop Scheduling: An Investigation in Constraint-Directed Reasoning

Mark S. Fox, Brad Allen, Gary Strohm
Intelligent Systems Laboratory
The Robotics Institute
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

1. Introduction

This paper describes ISIS-II*, a constraint-directed reasoning system for the scheduling of factory job-shops. ISIS-II takes a heuristic search approach to generating schedules. The key features of ISIS-II's approach are that it can represent and use a variety of different types of constraints to guide the search, and is able to selectively relax conflicting constraints.

The plant under consideration** represents one of the most complex of scheduling tasks. The plant produces thousands of different parts, some of which are similar, some of which are not. Any part can be ordered in quantities from one to hundreds. Each part number has one or more process routings containing at least ten operations. A process routing may differ simply in machine substitutability, or may represent a totally different manufacturing process. Each operation in a process routing requires resources such as machines, tools, operators, fixtures, materials, etc. At any time there are over 200 orders in the plant, each competing for the same resources.

This scheduling problem has been described as NP-hard. The simple sequencing (without gaps or alternative routings) of 10 orders on 5 machines can result in (10!)^5 possible schedules. Rather than do simple capacity analysis, as found in the majority of vendor scheduling systems, or use a local dispatch rule approach as found in operations management research, a constraint-directed reasoning approach was chosen. It was found that schedulers spend 80%-90% of their time determining the constraints in the environment that will affect their scheduling decision, and 10%-20% of their time actually constructing and modifying schedules. Any system that was to adequately schedule such an environment must attend to the multitude and variety of constraints.

*Intelligent Scheduling and Information System, Version 2. This research was supported, in part, by the Westinghouse Corporation, and the affiliates program of the Robotics Institute.

**The Westinghouse Turbine Component Plant in Winston-Salem NC.

The rest of this paper describes how ISIS-II represents, searches with, and relaxes constraints in the process of scheduling a job-shop.

2. Constraint Identification

The first step in the construction of ISIS-II was to determine the categories of constraints a scheduler considers. Four categories were distinguished.

The first are organizational goals. They include job tardiness, work in process, resource levels, cost, production levels, and shop stability. One can view these constraints as being approximations of a simple profit constraint. The goal of the organization is to maximize profits. Scheduling decisions are then made on the basis of current and future costs incurred. For example, not meeting a due date may result in the loss of a customer and, in turn, further profits. The longer the work in process time, the greater the carrying charge for raw materials and value-added operations. Maintaining a designated production level may amortize the cost of the capital equipment in a uniform manner. In practice, most of these costs cannot be accurately determined, but must be approximated. These approximations have resulted in the incorporation of the above constraints in the plant's operating goals.
Physical constraints are a second category. They specify what an object can or cannot be used for. For example, a milling machine may be limited in the size of turbine blade it can work on due to the length of its workbed. On the other hand, a drill may have a function or a graph that defines how long the drill can be run at a particular speed in a particular material.

Gating constraints are a third category. They define the conditions to be satisfied before an object can be used or a process begun. Examples of gating constraints are operation precedence and resource requirements.

Preference constraints are a fourth category. They provide the means by which preferences can be expressed. Machine preferences are one example; operation and queue position preferences are others. A preference can be viewed as an abstraction of other types of constraints. Consider a preference for a machine. It expresses a floor supervisor's desire that one machine be used instead of another. The reason for the preference may be due to cost or quality, but the supervisor does not have an actual cost or quality constraint to use due to lack of data.

3. Constraint Representation

A constraint may have one of two effects on a schedule. It may determine the admissibility of a schedule, or it may determine the acceptability of a schedule. Admissibility determines the legality of a schedule against constraints that cannot be relaxed. Acceptability rates a schedule, allowing alternatives to be distinguished. In a domain with many constraints of a possibly conflicting nature, a constraint must specify three things: what it is constraining, what the alternatives of the constraint are if it cannot be satisfied, and how well the object of the constraint satisfies them.

The general constraint schema contains three slots (figure 3-1). The PRECONDITION defines the applicability of a constraint. The EVALUATION-FUNCTION, when evaluated, provides a rating of the decision. The WEIGHT denotes the relative importance of the constraint.

{{ constraint
   PRECONDITION:
   EVALUATION-FUNCTION:
   WEIGHT: }}

Figure 3-1: constraint Schema

How well a decision satisfies a constraint is represented by the rating. A constraint may return a rating in the interval [0,2]. 0 denotes rejection, 1 denotes indifference, and 2 denotes maximal support.

More than one type of constraint relaxation is distinguished. A constraint can be Binary, or offer two or more Choices. Binary constraints represent either a Preference or a Requirement. The former rates (acceptability), the latter prunes (admissibility). An attribute-restriction schema is a Requirement. It defines a test on an attribute. An example is the length constraint test for a machine (figure 3-2). It defines that the blade is to have a foil-length of less than 28.5 (inches).

{{ length-constraint
   { INSTANCE attribute-restriction
     OBJECT: blade
     ATTRIBUTE: foil-length
     ATTRIBUTE-VALUE: 28.5
     PREDICATE: less-than } }}

Figure 3-2: length-constraint Schema

A choice-constraint specifies a constraint, its relaxations and their utilities. Choices can be discrete or continuous.

{{ discrete-constraint
   { IS-A choice-constraint
     ALTERNATIVE:
       Restriction: (TYPE INSTANCE choice-constraint)
     TYPE:
       Restriction: (OR exclusive inclusive)
       Default: exclusive } }}

Figure 3-3: discrete-constraint Schema

A discrete-constraint contains an ALTERNATIVE slot which specifies alternative discrete values and their utilities. The TYPE slot defines whether the alternatives are to be exclusive or inclusive.
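As a rough illustration of figures 3-1 and 3-2, the sketch below (Python; the class layout and the exact encoding of ratings are our assumptions, not ISIS-II's implementation) models a constraint as a precondition, an evaluation function returning a rating in [0,2], and a weight:

class Constraint:
    def __init__(self, precondition, evaluation_function, weight):
        self.precondition = precondition                 # applicability test
        self.evaluation_function = evaluation_function   # state -> rating in [0,2]
        self.weight = weight                             # relative importance

    def rate(self, state):
        # 0 denotes rejection, 1 indifference, 2 maximal support.
        return self.evaluation_function(state)

# The length-constraint of figure 3-2 as an attribute restriction:
# reject any blade whose foil-length is not less than 28.5 inches.
length_constraint = Constraint(
    precondition=lambda state: "foil-length" in state,
    evaluation_function=lambda state: 2.0 if state["foil-length"] < 28.5 else 0.0,
    weight=1.0,
)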
An example of a discrete-constraint is the specification of the number of shifts associated with a particular machine. A shift is represented by a shift schema (figure 3-4). It is a discrete-constraint with the value elaborated into START-TIME, END-TIME, and DAY.

{{ shift
   { IS-A discrete-constraint
     (ELABORATE VALUE --> START-TIME END-TIME DAY) }
   MACHINE: }}

Figure 3-4: shift Schema

An example of a shift is that specified for the wmf1 machine (figure 3-5).

{{ wmf1-shift
   { INSTANCE shift
     MACHINE: wmf1
     START-TIME: 8:00
     END-TIME: 16:00
     DAY: (OR monday tuesday wednesday thursday friday)
     UTILITY: 2
     ALTERNATIVE:
       {{ INSTANCE shift
          START-TIME: 16:00
          END-TIME: 24:00
          DAY: (OR monday tuesday wednesday thursday friday)
          UTILITY: 1 }}
     TYPE: inclusive }}

Figure 3-5: wmf1-shift Schema

An example of a continuous constraint is an order due-date. It specifies the utility of meeting the due date, and the utility of all possible early and late ship dates.

4. Constraint Relaxation

The representation of a constraint provides information on how to relax it, and the utility of the relaxation. The decision as to when to relax a constraint is made in two ways.

Generative Relaxation. Constraints are relaxed in a generative fashion during heuristic search. The search operators generate states which represent alternative relaxations of one or more constraints.

Analytic Relaxation. A rule-based system analyzes an order before it is to be scheduled to determine the relative importance of constraints, and in turn which should be relaxed. Another set of rules perform a post-search analysis to determine whether the schedule is reasonable, and if not, what other constraints should be strengthened or relaxed.

{{ mfg-order-xxxx
   { IS-A manufacturing-order
     PRIORITY-CLASS: forced-outage
     PRIORITY: 7
     STYLE: #7 J8924
     ROWS: 12
     DUE-DATE: {{ INSTANCE due-date-constraint
                  DATE: (450 0 0) }}
     SEARCH-OPERATOR: choose-operation choose-machine choose-queue-position
     STATUS: posted
     SCHEDULING-DIRECTION: backward
     INITIAL-SEARCH-STATE: {{ INSTANCE search-state
                              OPERATION: 980 }} } }}

Figure 5-1: Posted Manufacturing Order

5. Constraint-Directed Search

ISIS-II constructs a schedule for an order by performing a beam search (Lowerre & Reddy, 1976) in the space of partial schedules. It first performs a pre-search analysis to generate the boundaries of the search space, followed by a constraint-directed search. And lastly it performs a post-search analysis to determine whether the search was effective.

5-1. Defining the Search Space

The first step taken in scheduling an order is to define the problem. That is, to define:

-the constraints that bound the search space, which in turn define the search operators,

-any new constraints that do not already exist,

-the constraint classes to be ignored, and

-a prioritization of the remaining constraint classes.

A rule-based approach is used to examine an order to determine the above. It used information such as order priority, system goals, and previous scheduling history. An example is the order mfg-order-xxxx (figure 5-1). The order is a "forced outage" which implies that the due-date is a major constraint. The system creates due-date and work in process constraints. It also bounds the search space by only considering alternative operations, machines, and queue positions.

5-2. Searching the Search Space

Before performing the search, a second set of rules examine the order to determine the direction of search (forward from start date, backward from due date, or at some point in a partial schedule), and the initial states in the search. These rules specified the SCHEDULING-DIRECTION and the INITIAL-SEARCH-STATE in figure 5-1. The direction was chosen based on the order's PRIORITY-CLASS.

The search begins at the specified initial states. The specified operators extend these states. After each application of an operator, the generated states are rated by applying the constraints, and only the best "n" states are kept for the next iteration of operator application. Each state in the path defines one more operation, machine, and queue binding for the order. A complete schedule is defined by the path from the initial state to the end state in search space. A schematic sketch of this loop is given below.
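This sketch (Python; an idealized rendering under assumed interfaces -- operators as functions from state to successor state, plus the Constraint class sketched earlier -- rather than ISIS-II's actual code) shows the beam-search loop:

def rate(state, constraints):
    # Weighted average of the ratings of the applicable constraints
    # (see section 5-4); no applicable constraints is treated as
    # indifference.
    applicable = [c for c in constraints if c.precondition(state)]
    if not applicable:
        return 1.0
    total = sum(c.weight for c in applicable)
    return sum(c.weight * c.rate(state) for c in applicable) / total

def beam_search(initial_states, operators, constraints, n, is_complete):
    states = list(initial_states)
    while states:
        finished = [s for s in states if is_complete(s)]
        if finished:
            return max(finished, key=lambda s: rate(s, constraints))
        # Extend every state with every operator, discarding failures ...
        successors = [s2 for s in states for op in operators
                      for s2 in [op(s)] if s2 is not None]
        # ... then keep only the best n states for the next iteration.
        successors.sort(key=lambda s: rate(s, constraints), reverse=True)
        states = successors[:n]
    return None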
5-3. Constraint Resolution

The key to the search for a schedule is the application of constraints in rating a search state (partial schedule). The rating of a state can be divided into two parts: resolving what constraints should be applied to the state, and applying the constraints to the state.

5-3-1. Local Resolution

As the search proceeds, states are generated which vary widely in their choice of operations, machines, and queue positions. Not all constraints in the system may be relevant in rating the state (partial schedule) in question. The applicable constraints are dynamically determined, and may originate from four sources: their placement in the plant model, their hierarchical imposition by other systems such as capacity analysis (e.g., removing a routing due to bottlenecks), their lateral imposition early on in the search (e.g., choosing an operation early in the routing may disqualify a later operation), and their exogenous imposition by the user. After the local constraint set is resolved, ISIS-II filters the set by evaluating each constraint's precondition. The precondition is the final context-sensitive test of a constraint's applicability. Only constraints with a true precondition form the final local constraint set.

5-3-2. Global Resolution

Unlike some simple game tree searches, the path which leads to a search state is as important as the state itself. Local resolution, as defined above, resolves what constraints affect only the state under consideration. But the rating of a state is a rating of the partial schedule up to the current state, and not the single choice represented by the state. Hence, the rating of a state must include not only the local constraints but the constraints applied to all the states along the partial schedule ending at the current state.

Constraints are classified into two categories: invariant (e.g., operation preference, queue ordering) and transient (e.g., a due-date or work in process estimator). When ISIS-II rates a state it collects all the invariant constraints along the path from the initial state to the current, and includes them in the rating. ISIS-II also gathers up all the transient constraints, but does not retain duplications. Only the latest instantiation of a transient constraint (closest to the current state) is saved. Transient constraints are estimators of the rating of a partial schedule. Each application of a transient constraint updates its previous application. The union of invariant and transient constraints forms the constraint set for the state under consideration.

5-3-3. Relative Resolution

After the constraint set is resolved, each constraint is weighted. The relative importance of a constraint is defined by a scheduling-goal. A scheduling-goal partitions the constraints, assigning a weight to be distributed amongst a partition's members. For example, an order of priority-class "forced outage" would place greater weight on the satisfaction of time constraints such as due-date, and less on constraints such as queue preferences, queue stability, etc.

5-4. Constraint Application

Once the constraint set has been resolved and weighted, ISIS-II derives a rating for the state by computing the weighted average of each constraint's rating of the state. The state stores each constraint's weight and rating at the state for later use in explaining a schedule's search path. Once all the current states have been rated, all but the top "n" states are thrown away and the search is repeated.

5-5. Post-Search Analysis

As the search proceeds, ISIS-II continually tests to see whether the goal state has been generated, and/or the search has died without finding a solution. If either of these has occurred, post-search analysis is entered. Post-search analysis is accomplished by a set of rules that analyse the results in order to determine whether:

-the system found a satisfactory solution.

-the system should continue searching if the solution is unsatisfactory.

-another search strategy should be pursued.

Consider the example of mfg-order-xxxx. It is of priority-class "forced outage". A forced outage blade must be shipped by its due date, if not sooner. ISIS-II first attempts to schedule it backwards from its due date. If a feasible schedule is not found, i.e., it runs out of time before it runs out of operations, it then resets the order's scheduling parameters to schedule forward from "today". If the results of this scheduling attempt are poorly rated, then it marks the order as having failed again, and posts it for re-scheduling. This time, the pre-analysis rules add another operator which searches the shift dimension for a machine.

6. Conclusions

ISIS-II represents an approach to doing constraint-directed reasoning in a very large solution space. It differs from other constraint-based reasoning systems (e.g., Fikes, 1970; Goldstein, 1977; Sussman & Steele, 1980; Stefik, 1981; Waltz, 1975; Zucker, 1976), in the variety of constraints it can represent, and its focus on constraint relaxation as an integral part of the knowledge representation and search process.

ISIS-II is continually being tested on large amounts of data, resulting in the alteration of existing constraints, and the addition of new ones. Its approach has been rated highly by the expert schedulers, and is scheduled to be placed in the factory by September of 1982.

7. References

Fikes R.E., (1970), "REF-ARF: A System for Solving Problems Stated as Procedures", Artificial Intelligence, Vol. 1, pp. 27-120.

Goldstein I.P., and R.B. Roberts, (1977), "NUDGE: A Knowledge-Based Scheduling Program", MIT AI Memo 405.

Lowerre B., (1976), "The HARPY Speech Recognition System", Ph.D. Thesis, Computer Science Dept., Carnegie-Mellon University, Pittsburgh PA.

Stefik M., (1981), "Planning with Constraints (MOLGEN: Part 1)", Artificial Intelligence, Vol. 16, pp. 111-140.

Sussman G.J., and G.L. Steele Jr., (1980), "CONSTRAINTS - A Language for Expressing Almost-Hierarchical Descriptions", Artificial Intelligence, Vol. 14, pp. 1-39.

Waltz D., (1975), "Understanding Line Drawings of Scenes with Shadows", in P.H. Winston (Ed.), The Psychology of Computer Vision, New York N.Y.: McGraw-Hill.

Zucker S.W., (1976), "Relaxation Labelling and the Reduction of Local Ambiguities", in Pattern Recognition and Artificial Intelligence, C.H. Chen (Ed.), New York: Academic Press.
KNOWLEDGE INTEGRATION IN TEXT RECOGNITION

Sargur N. Srihari and Jonathan J. Hull
Department of Computer Science
State University of New York at Buffalo
Amherst, New York 14226

ABSTRACT

The paper describes an algorithm based on AI techniques for recognizing words of printed or hand-written text--with the technique developed also applicable to correcting substitution spelling errors. The algorithm effectively integrates bottom-up information in the form of letter shapes, letter transitional probabilities and letter classification-error probabilities together with top-down knowledge in the form of a lexicon of legal words represented as a letter trie. Experimental results with the algorithm are reported for the combined top-down and bottom-up approach and for each of the two approaches individually.

I INTRODUCTION

The recognition of text that is machine printed with multiple fonts, hand-printed, or written as cursive script finds many applications including that of office automation. Some present generation optical character readers [OCRs] accept single font machine print or text hand-printed under certain constraints. Any deviation from these constraints results in rejection or a highly garbled representation of the text. Human beings perform better than present OCRs by at least an order of magnitude in error rate although their performance when viewing a letter in isolation does not significantly differ from OCR error rate. This is attributed to effective use of contextual factors like letter sequences, vocabulary, word-dependency, sentence-structure and phraseology, style and subject matter as well as the associated processes of comprehension, inference, association, guessing, prediction and imagination, all of which take place very naturally during the process of reading. The example of Fig. 1 illustrates some aspects of this process. Although the letters 'H' and 'A' in the words 'THE' and 'PAPER' are identically printed--thereby leading to identical feature vectors--they are easily distinguished by the human reader due to the presence of surrounding letters in the respective words. The last word of the sentence is either 'CLIP' or 'CUP' which can be disambiguated by more global knowledge, e.g., if the next sentence were 'I NEED SOME COFFEE' then the word in doubt is probably 'CUP'.

It is clear that if computer programs are to reach expert human ability in text recognition then they need to be able to effectively integrate diverse contextual knowledge sources about the text, as well as knowledge about the kinds of textual errors that are likely, i.e., characteristics of the text transmission channel that introduces errors. A number of programs that utilize only a few knowledge sources in text recognition are described in the literature; tutorial surveys of these methods have been made [1],[2]. Some of these methods, viz., text recognition algorithms, are directly applicable to a set of image vectors representing characters of text and others, viz., text error correction algorithms, are applicable only to previously decoded text. A majority of these methods can also be characterized as those that are data-driven or bottom-up, and those that are concept-driven or top-down.

Data-driven algorithms proceed by refining successive hypotheses about an input string. An example is a program that utilizes a statistical (Markovian) representation of contextual knowledge in the form of a table of transitional probabilities, i.e., the probability of each letter given that a letter sequence has previously occurred.

[Figure 1, a hand-printed sentence whose final word can be read as either CLIP or CUP, is omitted here.]

Fig. 1. Identical patterns have different interpretations in different contexts.

*This work was supported by the National Science Foundation Grant IST-80-10830.
Concept-driven algorithms proceed with an expectation of what the input string is likely to be and proceed to fit the data to this expectation. Examples are algorithms that use an implicit or explicit representation of a lexicon.

This paper describes an algorithm that effectively merges a bottom-up refinement process based on the utilization of transitional probabilities and letter confusion probabilities, known as the Viterbi Algorithm [VA], with a top-down process based on searching a lexicon, and that is applicable to text containing an arbitrary number of character substitution errors such as that produced by OCR machines. The work is part of a larger ongoing effort on the text recognition problem at SUNY/Buffalo.

II THE BOTTOM-UP APPROACH

The VA is a method of finding the word that maximizes likelihood over all possible letter combinations and not necessarily those in a lexicon; it is based on a dynamic programming formulation which leads to a recursive algorithm [3]. The method utilizes the characteristics of the OCR channel in the form of a table of confusion probabilities. Each entry of this table represents the probability that the OCR channel assigns a given letter to another (possibly the same) letter due to ambiguities in the shape features used to classify shapes into character classes.

The algorithm can be viewed as that of finding a maximum cost path through a directed graph called a trellis. The log-transitional probabilities are associated with the edges of the trellis and the log-confusion probabilities are associated with the nodes. The cost of a path is then the sum of all the edge and node values in the path. We use a computationally improved version of the VA where the number of alternatives per letter is variable--these alternatives are determined by the letters that have the highest confusion probability.

This method represents a purely bottom-up approach whose performance may be unacceptable due to the fact that the resulting strings do not necessarily belong to a lexicon. Our approach to improve performance is to use top-down contextual information, in the form of a lexicon of allowable input words, to aid the bottom-up performance of the VA.
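The trellis computation just described is a standard dynamic program. The following sketch (Python, with toy dictionary-of-dictionaries tables standing in for the paper's probability tables; details are our assumptions) illustrates it:

def viterbi(observed, alphabet, log_trans, log_conf):
    # observed: the letters reported by the OCR channel.
    # log_trans[a][b]: log probability of letter b following letter a
    #   (the trellis edge values).
    # log_conf[a][b]: log probability that true letter a was observed
    #   as letter b (the trellis node values).
    # best[c] = (cost of the best path ending in letter c, that path).
    best = {c: (log_conf[c][observed[0]], [c]) for c in alphabet}
    for obs in observed[1:]:
        new_best = {}
        for c in alphabet:
            cost, prev = max((best[p][0] + log_trans[p][c], p) for p in alphabet)
            new_best[c] = (cost + log_conf[c][obs], best[prev][1] + [c])
        best = new_best
    return max(best.values())[1]  # the maximum-cost letter sequence

Restricting each column to the few letters of highest confusion probability gives the variable-alternatives version described above.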
III LEXICAL REPRESENTATION

The lexical data structure and method of access is critical to the efficiency of any text correction algorithm. Several alternative structures are available--the choice has to be based on the search strategy of the algorithm and the memory available.

A data structure that is suitable for determining whether a given string is an initial substring, or prefix, of a lexical entry is known as the trie [4]. Since the VA proceeds by computing for a given length the most likely prefix, the trie is an attractive data structure. Essentially, the trie considers words as ordered lists of characters, elements of which are represented as nodes in a binary tree. Each node has five fields: a token, CHAR; a word-length indicator array of bits, WL; an end of word tag bit, E; and two pointers labelled NEXT and ALTERNATE (see Fig. 2).

[Figure 2, a diagram of the trie node record and an example trie, is omitted here.]

Fig. 2. Trie structure: (a) the fields of a typical record, and (b) trie of the lexicon: A, AN, AND, ANN, ANNOY, BAD, BADE, BADGE, DAY, DID, FAD, FAN, FAR.

A node is a NEXT descendent if its token follows the token of its father in the initial substring of a lexical word. It is an ALTERNATE descendent if its token is an alternative for the father's given the initial substring indicated by the most immediate ancestor which is a NEXT descendent. Without loss of generality it is required that the lexical value of the token of each ALTERNATE descendent be greater than that of its father. The end of word bit is set if its token and the initial substring given to reach the token comprise a complete dictionary word. The mth bit of the word length indicator array is set if the token is on the path of an m letter word in the trie.

IV THE COMBINED APPROACH

Simultaneous search of the VA trellis using a variable number of alternatives per input letter and the trie structure is controlled by a binary array A. This may be regarded as a blackboard through which the top-down and bottom-up processes communicate [5]. Element A[j,i] is set to 1 if the jth letter of the alphabet is a possible correction for the ith letter of the input word, i.e., its log-confusion probability exceeds a threshold t, and 0 otherwise. Thus the paths of the trellis that need to be evaluated are only those that begin at the 1's of the first column of A and proceed through the 1's of the subsequent columns. Before evaluating a path that proceeds from one column of A to the next column, that path is determined to be legal with respect to the trie. The computational complexity of the resulting algorithm is of the same order as the VA.
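A sketch of the trie-legality test at the heart of the combined approach (Python, using a nested-dictionary trie rather than the paper's five-field record; the per-position threshold column is assumed to be given):

def make_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker, playing the role of the E bit
    return root

def legal_extensions(trie_node, column_of_A):
    # column_of_A[ch] is 1 iff ch's log-confusion probability for this
    # input position exceeds the threshold t. Only letters that are both
    # plausible corrections and legal prefix continuations survive.
    return [ch for ch in trie_node if ch != "$" and column_of_A.get(ch, 0) == 1]

trie = make_trie(["A", "AN", "AND", "ANN", "ANNOY", "BAD", "BADE", "BADGE",
                  "DAY", "DID", "FAD", "FAN", "FAR"])
# From the prefix 'A', only 'N' continues a lexical word, and the trellis
# pursues it only if the current column of A marks 'N' as plausible.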
V EXPERIMENTAL RESULTS

To determine the performance and efficiency of the algorithm with actual text and to compare this with variations of the VA, a data base was established and experiments were conducted.

English text in the Computer Science domain (Chapter 9 of Artificial Intelligence, P.H. Winston, Addison Wesley, 1977) containing 6372 words was entered onto a disk file. Unigram and first order transitional probabilities were estimated from this source. A model reflecting noise in a communications channel was used to introduce substitution errors into a copy of this text and confusion probabilities were estimated from this source. A lexicon of 1724 words containing 12231 distinct letters was extracted from the correct text and a trie was constructed. There were 6197 nodes in the trie and the average number of alternates for all nodes was 1.62. The storage required to load the program and its knowledge data structures were, in terms of CDC Cyber 174 words: program (10K), trie (18K), confusion and transitional probability tables (1.5K).

An example of garbled text is given in Fig. 3 and its correction (produced with t=-11) is given in Fig. 4. It can be observed that the corrected text is significantly better than the text input to it. We also note the shortcomings that the words "lomputer" and "tayfr" were rejected, the garbled words "bm" and "bes" were erroneously corrected to "by" and "but" (instead of "be" and "few" respectively), and the lexical word "come" was not corrected to "some". Rejections could be eliminated by decreasing the alternative selection threshold t, thereby allowing more possibilities for each letter.

If we lcoi at what has prodused lcmputer imteliigence qo far, we see multiple lamers, each of which rests on primitives of ci.e naxd tayfr dowm, forminc a hierarcfical structure with a great deal interposed between the intelligent prphvem and the transistors which ultimatelu suppodt it. Figure 9-8 illustratss. All Of the cgmplexitu of one kevel is summarizfd abd distilled down to a bes simple asomic notions which axe the primitives oe the next lamer up. But with so much insulatiop, it ccnnot possmbly be that the detailfd nature of the lgwer levels can matter to what happens afoxe. This argues egainqt dhe idea that studning neurons cap lead to muah of an understanding about intelligence. Understandinw them beautifullu and entirelu cbn no more pvoduse an uncerstanding of intelligende than a complete undetstanding of transistors can uyeld insight into how a computer can understand scenes or reqpknds to English. Teople cannot think ib we pluci the neurons out of their brains but if we studu only neurons, we have onlm a slender chance of getting at intellkgence. Still, come critics argum that aomputerc cannkt bm intelligenx becavse digital hardware made of silicom can never do what braips made of neurons do. Their pocition is weakened bu the hierarchu argument and the lack of solid knowledge about what the ufthynkablm tanglad neuropil does.

Fig. 3. Garbled text input to algorithm.

To show the effects of differing levels of contextual information on performance at the optimum parameter setting of t=-11, i.e., where little additional performance improvement is observed by increasing the number of alternatives for each letter, the algorithm was run using only top-down information by setting all transitional probabilities equal, and the algorithm was again run without the trie, thus using only the bottom-up information provided by the transitional probabilities.
VI SUMMARY AND CONCLUSIONS We have presented an algorithm for text recognition that is able to utilize top-down knowledge in the form of a lexi- con of legal words (represented as a trie), channel characteristics in the form of probabilities that observed letters are corruptions of other letters (confusion probability table) and two types of bottom-up information: letter shapes (represented as vectors) and the proba- bility of a letter when the previous letters are known (transitional proba- bility table). The algorithm exhibits a significant increase in correction rate over its predecessors that do not use lexical information, and shows no increase in the order of complexity. REFERENCES [I] Peterson, J.L., "Computer programs for detecting and correcting spel- ling errors," Communications of the ACM, 23, 1980, pp. 676-687. [2] Hall, P.A.V., and G.R. Dowling, "Approximate string matching," Com- puting Surveys, 12, 1980, pp. 381-402. [3] Neuhoff, D.L., "The Viterbi algorithm as an aid in text recognition, 11 IEEE Trans. Inform. Theory, IT-21, 1975, PP. 222-228. [4] Knuth, D.E., The art of computer pro- gramming vol. 3: sorting and search- ing, Reading, MA: Addison-Wesley, 1973. [5] Goodman, G., and R. Reddy, "Alterna- tive control structures for speech understanding systems," in Trends in speech recognition, W.A. Lea, ed., Englewood Cliffs, NJ: Prentice-Hall, 1980, pp. 234-246. 161 Srihari, S.N., J.J. Hull and R. Choudhari, "An. algorithm for in- tegrating diverse knowledge sources in text recognition," TR-192. Dent. of Computer Science, SUXY/BuffaG- 1981. 151
A General Paradigm for A.I. Search Procedures*

Dana S. Nau, Vipin Kumar, and Laveen Kanal
Computer Science Department, Laboratory for Pattern Analysis
University of Maryland, College Park, MD 20742

ABSTRACT

This paper summarizes work on a General Branch and Bound formulation which includes previous formulations as special cases and provides a unified approach to understanding most heuristic search procedures developed in A.I.

I. INTRODUCTION

A wide class of problems arising in Operations Research, decision making and Artificial Intelligence can be (abstractly) stated in the following form: Given a (possibly infinite) discrete set X and a real-valued objective function F whose domain is X, find an optimal element x* ∈ X such that F(x*) = min{F(x) | x ∈ X}. Exhaustive enumeration of this set X for determining an optimum element is too inefficient for most practical problems. Hence, procedures (e.g. Branch and Bound [LAW66], A*, AO*, Alpha-Beta [NIL80], and B* [BER79]) have been developed to solve various versions of this problem efficiently by utilizing problem-specific knowledge. The underlying idea of such procedures is to decompose (or split) X into smaller and smaller sets. The utility of this approach derives from the fact that in general most of these sets will be pruned (or eliminated from further consideration), whence only a small fraction of X need be enumerated.

In this paper we summarize a general abstract formulation of Branch and Bound (B&B), which extends previous work by many researchers (e.g. Mitten [MIT70]; Lawler and Wood [LAW66]; Balas [BAL68]; Smith [SMI79]; Reingold, Nievergelt, and Deo [REI77]; Horowitz and Sahni [HOR78]; Kohler and Steiglitz [KOH74]; and Ibaraki [IBA78]). Until recently, only upper and lower bounds were used for pruning in B&B procedures (hence the name Branch and Bound). Many A.I. search procedures (A*, AO*, alpha-beta, etc.) use more sophisticated dominance relations to prune the sets, although they perform branching (i.e., set splitting) in a similar form. This caused at least some of the confusion ([POH72], [HAL71], [MAR78]) as to whether A*, AO* and other heuristic procedures are really Branch and Bound. The addition in B&B of the new concept of dominance in pruning was introduced by Kohler & Steiglitz [KOH74] and further investigated by Ibaraki [IBA78]. Our formulation of B&B simplifies and generalizes the idea of dominance: we allow a set to be pruned when at least one of the remaining sets contains an optimal element. All other pruning techniques can be considered as special cases of this approach. We have shown [NAU82] that A* and AO* are special cases of our general B&B formulation. Similar results can be given for SSS*, Alpha-Beta, B*, etc. ([KAN81], [KUM81], [KUM82]).

* This work was supported by NSF Grant ENG-7822159 and NSF Grant MCS81-17391 to the Laboratory for Pattern Analysis at the University of Maryland.

II. THE BASIC CONCEPT OF GENERAL BRANCH & BOUND

Our basic concept of General Branch and Bound is the procedure below (comments are indicated by double slashes ("//")):

procedure PO:
1. ACT := {X}    // ACT is the current active set //
2. loop
3.   if ACT = {Z} for some Z and Z is a singleton {z}
4.   then return z
5.   endif
6.   SEL := select(ACT)    // select some of the sets in ACT //
7.   SPL := split(SEL)     // split the sets in SEL //
8.   ACT := prune((ACT-SEL) ∪ SPL)
       // remove the selected sets from ACT, replace them by the newly
       // generated sets, and then prune unneeded sets from ACT //
9. repeat
end PO

ACT, the active set, is a collection of subsets of X.

select, the selection function, is any function which returns a collection SEL ⊆ ACT. The domain of select is the set of all possible values which ACT might have at line 6 of PO.

split, the splitting function, has as its domain the set of all possible values which the collection SEL might have at line 7 of PO. split(SEL) returns a collection SPL of subsets of X such that--

1. every set in SPL is a subset of some set in SEL;

2. ∪{Y' | Y' ∈ SPL} = ∪{Y | Y ∈ SEL}; i.e., the sets in SPL contain precisely those elements which are members of the sets in SEL.

prune, the pruning function, has as its domain the set of all possible values which the collection of sets R = (ACT-SEL) ∪ SPL might have at line 8 of PO. prune returns a collection of sets R' ⊆ R such that min{F(y) | y ∈ Y for some Y ∈ R'} = min{F(y) | y ∈ Y for some Y ∈ R}; i.e., at least one of the minimum elements of R is also present in R'.

In [NAU82], this concept is developed more fully by dealing with the ways in which the members of the active set ACT are represented, and the use of problem-dependent auxiliary data in the selection, splitting, and pruning functions. In [NAU82], it is also shown how formulations of B&B in the literature are special cases of GBB.

Because the invariance

min{F(x) | x ∈ X} = min{F(x) | x ∈ Y for some Y ∈ ACT}

remains true throughout the execution of the procedure PO, it is easy to see that, at termination, PO finds an optimum element of X.

In various problem domains, it is possible to easily compute lower bounds on the F-values of the elements of subsets of X present in the active list ACT. These bounds can be used to perform pruning by the prune function. Also suppose that: (i) the lower bound on a singleton set {x} is always F(x), and (ii) the selection function is best-first, i.e., the set chosen from ACT by the function select is always the one having the least lower bound. Then it can be shown that the first singleton set selected by the select function will be an optimum element of X. This makes the procedure even more efficient. In this case, PO can be rewritten as follows.

procedure P3B:    // best-first GBB //
1. ACT3 := {X}
2. loop    // the main loop //
3.   if ACT3 = {} then return "unknown" endif
4.   SEL3 := select3(ACT3)
5.   if SEL3 is a singleton {x} and goal(x)
6.   then return x
7.   else
8.     SPL3 := split3(SEL3)
9.     ACT3 := prune3(ACT3 - SEL3, SPL3)
10.  endif
11. repeat
end P3B

III. A*

The well-known A* algorithm [NIL80] is a procedure for finding a least-cost path on a graph. To consider A* as a special case of GBB, we note that each node n on the OPEN list of A* actually represents a path P from the source node to n. The set X consists of all paths from the source node to any goal nodes, and P represents the subset of X consisting of all extensions of P to goal nodes. This allows A* to be rewritten as an instantiation of P3B as follows.

procedure P7:    // A*, rewritten //
1.  ACT7 := list containing the null path from s to s
2.  GEN7 := NIL
3.  while ACT7 ≠ NIL do
4.    {P} := select7(ACT7)    // select first member P of ACT7 //
5.    insert P into GEN7
6.    if goal7(P)
7.    then return P
8.    else
9.      SPL7 := split7({P})
10.     ACT7 := ACT7 - {P}
11.     for every path Pn in SPL7 do
12.       for every Q in ACT7 or GEN7 do
13.         if tip(Q)=n and L7(Q) <= L7(Pn)
14.         then goto PRUNE    // prune Pn //
15.         elseif tip(Q)=n and L7(Pn) < L7(Q)
16.         then call remove7(Q)
17.         endif
18.       endfor
19.       parent(Pn) := P
20.       ACT7 := insert7(Pn, ACT7)
            // insert Pn into ACT7 after all nodes n such that f'(n) < f'(Pn) //
21.     PRUNE: endfor
22.   endif
23. endwhile
24. return "unknown"
end P7

procedure remove7(P)
1. if P ∈ ACT7 then remove P from ACT7 endif
2. if P ∈ GEN7 then remove P from GEN7 endif
3. for every Q such that parent(Q)=P do
4.   call remove7(Q)
5. endfor
end remove7

The functions used above are defined as follows.

(1) The active list (ACT7), which corresponds to the OPEN list in the usual formulation of A*, is a set of paths from the source node to various nodes in the graph.

(2) goal7(P) holds only if P is a path from the source node to a goal node.

(3) select7(ACT7) returns the first member of ACT7. This is the path P in ACT7 having the least lower bound L7(P), where L7(P) = cost(P) + h(tip(P)), and h is the usual A* heuristic function.

(4) split7(P) returns the paths created by expanding the tip node of P.
else 10 SPL7 := split7(iPj) 11 ACT7 := ACT7 - IPI 12 for every path Pn in SPL7 do 13 for every Q in ACT7 or GEN7 do 14 if tip(Q)=n and L7(Q) 5 L7(Pn) then 1 I- ,,3 goto PRUNE // prune Pn // i \5 elseif tip(Q)=n and L7(Pn) < L7(Q) 17 then call remove7(Q) 18 endif 19 endfor 20 parent := P 21 ACT7 := insert7(Pn,ACT7) // insert Pn into ACT7 after all // //nodes n such that f'(n) < f'(Pn)// 22. PRUNE: endfor 23. endif 24. endwhile 25. return lVunknownV1 end P7 procedure remove7(P) 1. if P 6 ACT7 then remove P from ACT7 endif 2. if P 6 GEN7 then remove P from GEN7 endif 3. for every Q such that parent(Q)=P do 4. call remove7(Q) 5. endfor end (1) (2) (3) (4) remove7 The active list (ACT7), which corresponc: +, '? the OPEN list in the usual formulation of A", is a set of paths from the source node to various nodes in the graph. goal7(P) holds only if P is a path from the source node to a goal node. select7(ACT7) returns the first member of ACT7. This is the path P in ACT7 having the least lower bound L7(P), where L7(P) = cost(P) + h(tip(P)), and h is the usual A* heuristic function. split7(P) returns the paths created by expanding the tip node of P. 121 (5) insert7(P,ACT7) inserts P into ACT7 just after the last path Q in ACT7 such that L7(Q) < L7(P). It is shown in [NAU82] that the pruning done in P7 satisfies the properties of a pruning function. IV. AO' - - AO* [NIL801 is a procedure for finding a least-cost solution graph in a hypergraph (a formalization of an AND/OR graph). To consider AO* as a special case of GBB, we note that the search graph maintained in AO* actually represents a collection of partial solution graphs. The set X is the set of all complete solution graphs of the hypergraph searched, and each partial solution graph P represents the subset of X consisting of all extensions of P to complete solution graphs. The partial solution graph found in AO* by tracing the marked nodes is the one having the least lower bound. Thus AO" can be rewritten as an instantiation of P3B as follows. procedure P9: //AO*, rewritten// 1. ACT9 := the partial solution graph containing only the source nodes 2. loop //the test below will never succeed, and is included merely to illustrate that P9 is an instantiation of P3B// 3. if ACT9 =,0'then return "unknown" endif 4. SEL9 := selectg(ACT9) 5. if SEL9 is a singleton (r} and goal (r) then 6. return r ii: else SPL9 := splitg(SEL9) 9. ACT9 := pruneg(ACT - SEL9, SPL9) 10. endif 11. repeat end P9 The functions used above are defined as follows. (1) The active set (ACT9), which corresponds to the search graph G in Nilsson's formulation of AO*, is the set of all partial solution graphs in G. (2) goal9(P) holds only if P is a complete solution graph. (3) select9 returns the member P of ACT9 having the least lower bound L9(P). This happens to be the partial solution graph found in Nilsson's version of AO* by tracing the marked connectors. (4) L9(P) (the 1 ower bound mentioned in item 3) is the sum of all of the arc costs of P, plus the sum of the h-values of the tip nodes of P. This is the same as the value q(P) maintained by Nilsson. (5) split9(SEL9) returns the set of all partial solution graphs in SEL9 which contain n, where n is the node select in Nilsson's version of AO". (6) pruneg(ACT9 - SEL9, SPL9) returns the set of all partial solution graphs in the search graph formed by expanding the node n selected in Nilsson's version of AO*. V. CONCLUSIONS - We have summarized our work showing that the A.I. 
V. CONCLUSIONS

We have summarized our work showing that the A.I. search procedures A* and AO* are special instances of our general branch and bound formulation. It can be shown that a number of other A.I. procedures are also special cases of GBB. Many variations of existing search algorithms can be generated from this branch and bound paradigm, which provides a theoretical basis for a better understanding of the performance of such algorithms and of the relationships among them (e.g., [KAN81], [KUM81], [KUM82]). In particular, we conjecture that all procedures for top-down search of problem reduction representations can be examined and understood as instantiations of this General Branch and Bound procedure.

REFERENCES

BAL68   Balas, E. A Note on the Branch-and-Bound Principle. Operations Research 16 (1968), 442-444. Errata p. 886.

BER79   Berliner, H. The B* Tree Search Algorithm: A Best-First Proof Procedure. Artificial Intelligence 12 (1979), 23-40.

HAL71   Hall, P. A. V. Branch-and-Bound and Beyond. Proc. Second Internat. Joint Conf. Artif. Intell. (1971), 641-658.

HOR78   Horowitz, E. and Sahni, S. Fundamentals of Computer Algorithms. Computer Science Press, Potomac, MD, 1978.

IBA77   Ibaraki, T. The Power of Dominance Relations in Branch and Bound Algorithms. J. ACM 24 (1977), 264-279.

IBA78   Ibaraki, T. Branch-and-Bound Procedure and State-Space Representation of Combinatorial Optimization Problems. Information and Control 36 (1978), 1-27.

KAN79   Kanal, L. Problem-Solving Models and Search Strategies for Pattern Recognition. IEEE Trans. Pattern Analysis and Machine Intell. 1 (1979), 193-201.

KAN81   Kanal, L. and Kumar, V. A Branch and Bound Formulation for Sequential and Parallel Game Tree Searching. Proc. Seventh Internat. Joint Conf. on Artificial Intelligence, Vancouver (August 1981), 569-571.

KAN81b  Kanal, L. and Kumar, V. Parallel Implementations of a Structural Analysis Algorithm. Proc. IEEE Computer Society Conf. Pattern Recognition and Image Processing, Dallas (August 1981), 452-458.

KOH74   Kohler, W. H. and Steiglitz, K. Characterization and Theoretical Comparison of Branch-and-Bound Algorithms for Permutation Problems. J. ACM 21 (1974), 140-156.

KUM81   Kumar, V. and Kanal, L. Branch and Bound Formulations for Sequential and Parallel And/Or Tree Search and Their Applications to Pattern Analysis and Game Playing. Submitted for publication, 1981.

KUM82   Kumar, V., Nau, D. and Kanal, L. A General Model for Problem Reduction and Game Tree Search. Working paper, 1982.

LAW66   Lawler, E. L. and Wood, D. E. Branch-and-Bound Methods: A Survey. Operations Research 14 (1966), 699-719.

MAR73   Martelli, A. and Montanari, U. Additive AND/OR Graphs. Proc. Third Internat. Joint Conf. Artif. Intell. (1973), 1-11.

MAR78   Martelli, A. and Montanari, U. Optimizing Decision Trees through Heuristically Guided Search. Comm. ACM 21 (1978), 1025-1039.

MIT70   Mitten, L. G. Branch and Bound Methods: General Formulations and Properties. Operations Research 18 (1970), 24-34. Errata in Operations Research 19 (1971), 550.

NAU82   Nau, D., Kumar, V. and Kanal, L. General Branch & Bound, and Its Relation to A* and AO*. Working paper, 1982.

NIL80   Nilsson, N. Principles of Artificial Intelligence. Tioga Publ. Co., Palo Alto, CA, 1980.

POH72   Pohl, I. Is Heuristic Search Really Branch and Bound? Proc. Sixth Annual Princeton Conf. Inform. Sci. and Systems (1972), 370-373.

RIE77   Reingold, E., Nievergelt, J. and Deo, N. Combinatorial Algorithms: Theory and Practice. Prentice-Hall, 1977.

SMI79   Smith, D. R.
On the Computational Complexity of Branch and Bound Search Strategies. Ph.D. Dissertation, Duke Univ., Durham, NC, 1979. Also Tech. Rep. NPS 52-79-114, Naval Postgraduate School, Monterey, CA, 1979.
UNIFYING DATA-DIRECTED AND GOAL-DIRECTED CONTROL: An Example and Experiments

Daniel D. Corkill, Victor R. Lesser, and Eva Hudlicka
Computer and Information Science Department
University of Massachusetts
Amherst, Massachusetts 01003

ABSTRACT

Effective control in a multi-level cooperating knowledge source problem solver (such as Hearsay-II) requires the system to reason about the relationships among competing and cooperating knowledge source (KS) instantiations (both past and potential) that are working on different aspects and levels of the problem. Such reasoning is needed to assess the current state of problem solving and to develop plans for using the system's limited processing resources to the best advantage. The relationships among KS instantiations can be naturally represented when KS activity is viewed simultaneously from a data-directed and a goal-directed perspective. In this paper we show how data- and goal-directed control can be integrated into a single, uniform framework, and we present an example and experimental results of sophisticated focusing using this framework.

I INTRODUCTION

The multi-level cooperating knowledge source model of problem solving, as posited by the Hearsay-II architecture, poses interesting control problems. Effective control using such a problem solving approach requires the control component to reason about the relationships among competing and cooperating knowledge source (KS) activities (both past and potential) and among KS activities working on different aspects and levels of the problem. Such reasoning is required in order to assess the current state of problem solving and to determine how the system should use its limited processing resources to the best advantage.

For example, the control component needs to develop and reason about sequences of KS activities relating to a particular approach to one aspect of the problem. This allows these activities to be scheduled as a coherent unit and to be eliminated as a unit if the approach proves unproductive. A second example is the implementation of an opportunistic scheduling strategy where the partial solution of a high-level problem is used to focus the system on low-level activities required to solve the remainder of the problem (focus of attention through subgoaling). Other examples are selecting a specialized KS to resolve the system's confusion over competing partial solutions, and instantiating activities to produce input data necessary for performing an important activity (precondition-action backchaining). All of these examples rely on the control component's ability to evaluate the potential effects of KS activities from a non-local context.

The data-directed and instantaneous scheduling mechanisms developed for the Hearsay-II speech understanding system could reason about KS relationships in only a rudimentary way [5]. That level of reasoning was sufficient for the KSs used in the final configuration of that speech system [3]. However, the limitations of this rudimentary control have become increasingly apparent to us and others as the multi-level cooperating KS model has been applied to different task domains [1].

----------
* The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Defense Advanced Research Projects Agency, or the U.S. Government.
Nii and Feigenbaum with SU/X [8], Engelmore and Nii with SU/P [2], and Erman et al. with Hearsay-III [4] recognized these limitations and consequently have developed systems with enhanced control capabilities. These enhancements permit more sophisticated control over scheduling by allowing the KS scheduling queues to be manipulated under program control. However, these modifications do not explicitly formalize the relationships among KS activities. Such relationships are left to the user to build. We feel these relationships need to be explicitly formalized if domain-independent control strategies are to be developed. The premise of this paper is that these relationships become apparent in a control framework in which KS activity can be viewed simultaneously from a data-directed and a goal-directed perspective.

In this paper, we first review the data-directed scheduling mechanisms of Hearsay-II. Next, we indicate how data-directed and goal-directed control can be integrated into a single, uniform framework through the generation of goals from data-directed events, and we show the structural relationships among KS activities that this framework creates. We then show an example of this framework performing sophisticated focusing of KS activity and present experimental results that show the advantages of the unified approach over a purely data-directed approach to control.

II DATA-DIRECTED HEARSAY-II SCHEDULING

Figure 1 presents a high-level schematic for data-directed control in Hearsay-II. KSs are invoked in response to particular kinds of changes on the blackboard, called blackboard events. The blackboard monitor knows which events at which levels interest each KS. The occurrence of a blackboard event does not guarantee that there is, in fact, sufficient information on the blackboard for a KS to be executed. The blackboard monitor executes a precondition procedure for each interested KS to make a more detailed examination and, if sufficient information is found, a KS instantiation (KSI) is created and placed onto the scheduling queue. The scheduler calculates a priority rating for each KSI on the scheduling queue, selecting for execution the one with the highest rating. Execution of the KSI causes changes to the blackboard which trigger additional blackboard events, and the process continues.

FIGURE 1: Data-Directed Hearsay-II Architecture

Although this data-directed Hearsay-II architecture has many advantages, it is severely limited in its ability to plan its interpretation activities. Scheduling is instantaneous -- only the immediate effects on the state of problem solving are considered. There is no inference process used to determine the effects of executing a KS beyond its immediate effects on the system state. Another limitation of this scheduling approach occurs when the precondition procedure cannot find sufficient information for the KS to be instantiated. The scheduler does not record which information is missing and has no way of re-evaluating the priorities of a pending KS that can generate the missing information, or of instantiating that KS if it is not already present. In the data-directed architecture it is assumed that if the information is really important, it will eventually be generated based on normal scheduling considerations.
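The loop just described can be summarized in a short sketch (ours; Hearsay-II itself is not written this way, and identifiers such as drain_events, rating, and precondition are illustrative):

   from dataclasses import dataclass

   @dataclass
   class KSI:                        # a knowledge source instantiation
       ks: object
       context: object
       def priority(self):           # instantaneous rating: local effects only
           return self.ks.rating(self.context)
       def execute(self, blackboard):
           self.ks.action(blackboard, self.context)

   def data_directed_loop(blackboard, interested_kss):
       queue = []
       while True:
           for event in blackboard.drain_events():
               for ks in interested_kss.get(event.kind, ()):
                   ctx = ks.precondition(blackboard, event)   # detailed check
                   if ctx is not None:
                       queue.append(KSI(ks, ctx))             # enqueue a KSI
           if not queue:
               return
           best = max(queue, key=KSI.priority)   # run the highest-rated KSI
           queue.remove(best)
           best.execute(blackboard)              # may trigger further events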
To remedy these control limitations within the basic Hearsay-II architecture, we next present an augmented version of the architecture that integrates data- and goal-directed control of KS activity via the generation of goals from blackboard events. Within this augmented architecture, a wide range of scheduling paradigms can be implemented efficiently: from those based on an instantaneous, statistical, and data-directed approach to those based on complex planning of goal-directed activity. In this way, the system developer can tailor the control to the specifics of the task domain and KS configuration.

III GOAL-DIRECTED HEARSAY-II SCHEDULING

Figure 2 presents a high-level schematic of Hearsay-II as augmented to accommodate goal-directed scheduling.

FIGURE 2: Goal-Directed Hearsay-II Architecture (showing the event-to-goals, goal-to-KS, and goal-to-subgoal tables)

A second blackboard, the goal blackboard, is added that mirrors the original (data) blackboard in dimensionality. The goal blackboard contains goals, each representing a request to create a particular state of hypotheses on the data blackboard in the (corresponding) area covered by the goal. For example, a simple goal would be a request for the creation of a hypothesis with specific attributes above a given belief in a particular area of the data blackboard.

The integration of data-directed and goal-directed control into a single, uniform framework is based on the following observation: the stimulation of a precondition process in the data-directed architecture not only indicates that it may be possible to execute the knowledge source, but also that it may be desirable to do so in order to achieve the goal implicit in the output of the KS. In order to make these implicit goals explicit, the event-to-KS mapping contained in the blackboard event table is split into two steps: event-to-goals and goals-to-KSs. The blackboard monitor watches for the occurrence of a data blackboard event, but instead of placing KSIs on the scheduling queue, it uses the event-to-goals mapping to determine the appropriate goals to generate from the event and inserts them onto the goal blackboard. Goals may also be placed on the goal blackboard from external sources. Placing a high-level goal onto the goal blackboard can effectively bias the system towards developing a solution in a particular way.

A new control component, the planner, is also added to the architecture. The planner responds to the creation of goals on the goal blackboard by developing plans for their achievement. In their simplest form these plans consist of goal/KSI relationships which specify one or more KSs that can potentially satisfy the created goals. The planner uses the goal-to-KS mapping to create these KSIs. More sophisticated planning activities consist of building goal/subgoal, precondition goal/KSI, and overlapping goal relationships [1]. The scheduler uses the relationships between the KSIs and the goals on the goal blackboard as a basis for its scheduling decisions.
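The two-step mapping might be sketched as follows (again ours, reusing the KSI class sketched above; the real tables are data structures rather than functions):

   def monitor(event, event_to_goals, goal_blackboard):
       # Step 1: a blackboard event yields goals rather than KSIs.
       for goal in event_to_goals(event):
           goal_blackboard.insert(goal)

   def plan_for(goal, goal_to_kss, blackboard, queue):
       # Step 2: the planner maps each new goal to candidate KSIs and
       # records the goal/KSI relationship for the scheduler's use.
       for ks in goal_to_kss(goal):
           ctx = ks.precondition(blackboard, goal)
           if ctx is not None:
               ksi = KSI(ks, ctx)
               ksi.stimulus_goals = [goal]
               queue.append(ksi)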
We have implemented a version of the goal-directed Hearsay-II architecture in a distributed interpretation system which produces a dynamic map of vehicles moving through a geographical area [7]. Figure 3 shows how goal-directed focusing can be used in this application to increase the priority rating of a low-level KSI based on the creation of a high-level hypothesis. The processing levels, in order of increasing abstraction, are: signal location (SL), group location (GL), vehicle location (VL), and vehicle track (VT).*

FIGURE 3: An Example (goal blackboard and data blackboard)

The creation of SL hypothesis H:SL:01 on the data blackboard causes the planner to create GL goal G:GL:01 on the goal blackboard. This goal indicates that the system should attempt to form a GL hypothesis using H:SL:01. The planner next instantiates KSI S:SL:GL:01 to try to achieve this goal. The rating of a KSI is a function of the belief of its stimulus hypotheses and the priority rating of its stimulus goals (if any). The priority of a goal is a function of the belief of its stimulus hypotheses, its level on the blackboard, and its relationships with other goals. Assume that H:SL:01 is weakly believed and consequently S:SL:GL:01 is given a low execution rating.

Processing continues with other SL hypotheses and eventually creates a VT hypothesis H:VT:01 with a moderately high belief. The creation of this hypothesis causes a number of goals to be created, including the goal shown in the figure, G:VT:02. This goal indicates that the system should attempt to extend H:VT:01. The planner uses domain knowledge in the form of a goal-to-subgoal mapping to decompose this high-level goal into an SL-level subgoal, G:SL:03. This subgoal indicates in what area it is necessary to have SL hypotheses in order to eventually extend the VT hypothesis. Subgoal G:SL:03 is given the same priority rating as its parent goal G:VT:02. The planner finds that H:SL:01 has already been created in this area and can satisfy G:SL:03. The planner then creates subgoal G:GL:04 and finds that goal G:GL:01 overlaps with it. The planner adds G:GL:04 as a second stimulus goal of the low-rated KSI S:SL:GL:01. The addition of the higher priority goal causes the rating of the KSI to be increased based on its potential contribution to the track extension goal G:VT:02.

Subgoaling can reduce the combinatorics often associated with the top-down elaboration of hypotheses. Top-down elaboration is generally used for two different activities: the generation of the lower-level structure of a hypothesis (to discover details) and the determination of which existing low-level hypotheses should be driven up to create or verify a high-level hypothesis based on expectations (for focusing). Top-down elaboration of hypotheses is best suited only to the first activity -- subgoaling on the goal blackboard is a more effective way to perform expectation-based focusing. When hypothesis elaboration is used as a focusing technique, the elaboration process has to be conservative in order to reduce the number of hypotheses generated and to reduce the possibility of generated low-level hypotheses being used as "real data" by knowledge sources in other contexts. Because subgoals are distinct from hypotheses, they can be liberally abstracted (such as supplying a range of values for an attribute) and underspecified (such as supplying a "don't care" attribute). Therefore, subgoaling the high-level goal of generating the expectation-based hypothesis (including the use of "level-hopping") avoids the combinatorial and context confusion problems associated with the use of top-down hypothesis elaboration for focusing.

* Additional processing levels used in the system are omitted here.
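A minimal sketch of such a decomposition (ours; the attribute names are invented for illustration) shows how a VT-level extension goal like G:VT:02 might yield an SL-level subgoal like G:SL:03:

   ANY = None                        # wildcard ("don't care") attribute value

   def predicted_region(vt_goal):
       # Stand-in for the domain knowledge that projects where the
       # tracked vehicle could appear next.
       return vt_goal["region"]

   def subgoal_track_extension(vt_goal):
       return {
           "level": "SL",                        # "level-hopping": VT straight to SL
           "region": predicted_region(vt_goal),  # a range of locations, not a point
           "frequency": ANY,                     # deliberately underspecified
           "priority": vt_goal["priority"],      # same rating as the parent goal
       }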
Planning operations, such as subgoaling and precondition goal/KSI chaining, permit sophisticated opportunistic focusing to be performed by the planner and scheduler. Highly rated low-level hypotheses can be driven up in a data-directed fashion, while high-level goals generated from strong expectations can be subgoaled downward to control low-level synthesis activities (as in the above example). Similarly, processing in low-rated areas can be stimulated if a highly rated knowledge source requires the creation of a precondition goal in that area.

IV FOCUS OF ATTENTION EXPERIMENTS USING SUBGOALING

We are beginning to experiment with the use of subgoaling as a focus of attention mechanism. Our goal is a set of rigorous experiments that quantify those situations in which subgoaling outperforms a simpler, purely data-directed approach. The characteristics we are varying include the confusability of the input data and the power of the KSs to resolve this confusability and to make effective predictions. To vary the power of KSs we are using a semi-formal model for simulating KSs of different power through the use of an oracle [6,7]. We also plan to vary the weighting factor used by the scheduler for evaluating KSIs. This weighting determines, in part, the balance between data-directed and goal-directed focusing by adjusting the relative contributions of the priority of goals that are potentially satisfied by a KSI and the predicted quality of the hypotheses produced by the KSI.

Figure 4 illustrates a simple scenario in which subgoaling high-level expectations effectively reduces the amount of processing required to generate the correct answer. In this figure, there are two tracks: one representing the signals from an actual vehicle and the other a false "ghost" vehicle. The actual track data consists of a sequence of high-belief SL hypotheses surrounding an area of low-belief SL hypotheses. The ghost track consists of a uniform sequence of medium-belief SL hypotheses.

FIGURE 4: The Experiment (legend: true, ghost, and false signals; low, medium, and high belief; also shown: the track hypotheses created without subgoaling)

The two tracks are sufficiently close that the system can produce track hypotheses composed of locations from both the actual and the ghost track. For simplicity, each vehicle is assumed to emit a single signal frequency.

Without focusing through the creation of subgoals, the system executes 54 KSIs to completely generate the correct track. With focusing based on subgoaling at the VT level, the system requires 28 KSIs. This significant speedup comes from the system avoiding considerable work in attempting to develop track hypotheses that integrate high-belief data from the actual track with medium-belief false data, and by interconnecting medium-belief false data before extending high-belief actual data with low-belief actual data. In purely data-directed scheduling, these activities seem reasonable from the scheduler's local view of the effects of KS activity.
However, by affecting the decisions of the scheduler with subgoals which represent predictions, much of the system work involved in processing data that partially correlates with the actual track, or that is of medium belief, can be avoided.

V CONCLUSION

We have shown how data- and goal-directed control can be naturally integrated into a single uniform control framework, permitting the development of a wide range of different scheduling and planning strategies for controlling knowledge source (KS) activity. This framework increases the number of task domains in which the multi-level, cooperative KS model of problem solving (used in the Hearsay-II architecture) is an effective approach.

We have also presented an example and an experiment indicating the potential advantages of a sophisticated focusing strategy based on subgoaling over a purely data-directed strategy which does not construct complex relationships among KS activities.

REFERENCES

1. Daniel D. Corkill and Victor R. Lesser. A goal-directed Hearsay-II architecture: Unifying data-directed and goal-directed control. Technical Report 81-15, Computer and Information Science Department, University of Massachusetts, Amherst, Massachusetts, June 1981.

2. Robert S. Engelmore and H. Penny Nii. A knowledge-based system for the interpretation of protein x-ray crystallographic data. Technical Report Stan-CS-77-589, Computer Science Department, Stanford University, Stanford, California, February 1977.

3. Lee D. Erman, Frederick Hayes-Roth, Victor R. Lesser, and D. Raj Reddy. The Hearsay-II speech understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys 12(2):213-253, June 1980.

4. Lee D. Erman, Philip E. London, and Stephen F. Fickas. The design and an example use of Hearsay-III. In Proceedings Seventh International Joint Conference on Artificial Intelligence, pages 409-415, August 1981.

5. Frederick Hayes-Roth and Victor R. Lesser. Focus of attention in the Hearsay-II system. In Proceedings Fifth International Joint Conference on Artificial Intelligence, pages 27-35, August 1977.

6. Victor R. Lesser, Jasmina Pavlin, and Scott Reed. Quantifying and simulating the behavior of knowledge-based interpretation systems. In Proceedings of the First Annual National Conference on Artificial Intelligence, pages 111-115, August 1980.

7. Victor Lesser, Daniel Corkill, Jasmina Pavlin, Larry Lefkowitz, Eva Hudlicka, Richard Brooks, and Scott Reed. A high-level simulation testbed for cooperative distributed problem solving. Technical Report 81-16, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts, June 1981.

8. H. Penny Nii and Edward A. Feigenbaum. Rule-based understanding of signals. In D. A. Waterman and Frederick Hayes-Roth, editors, Pattern-Directed Inference Systems, pages 483-501, Academic Press, 1978.
DIAGNOSIS BASED ON DESCRIPTION OF STRUCTURE AND FUNCTION

Randall Davis*, Howard Shrobe*, Walter Hamscher*, Karen Wieckert*, Mark Shirley*, Steve Polit**

* The Artificial Intelligence Laboratory, Massachusetts Institute of Technology
** Digital Equipment Corp.

Abstract

While expert systems have traditionally been built using large collections of rules based on empirical associations, interest has grown recently in the use of systems that reason from representations of structure and function. Our work explores the use of such models in troubleshooting digital electronics. We describe our work to date on (i) a language for describing structure, (ii) a language for describing function, and (iii) a set of principles for troubleshooting that uses the two descriptions to guide its investigation. In discussing troubleshooting we show why the traditional approach --- test generation --- solves a different problem, and we discuss a number of its practical shortcomings. We consider next the style of debugging known as violated expectations and demonstrate why it is a fundamental advance over traditional test generation. Further exploration of this approach, however, demonstrates that it is incapable of dealing with commonly known classes of faults. We explain the shortcoming as arising from the use of a fault model that is both implicit and inseparable from the basic troubleshooting methodology. We argue for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.

Introduction

While expert systems have traditionally been built using large collections of rules based on empirical associations (e.g., [9]), interest has grown recently in the use of systems that reason from representations of structure and function (e.g., [8], [7], [5]). Our work explores the use of such models in troubleshooting digital electronics. We view the task as a process of reasoning from behavior to structure, or more precisely, from misbehavior to structural defect. We are typically presented with a machine exhibiting some form of incorrect behavior and must infer the structural aberration that is producing it. The task is interesting and difficult because the devices we want to examine are complex and because there is no well developed theory of diagnosis for them.

Our ultimate goal is to provide a level of performance comparable to that of an experienced engineer, including reading and reasoning from schematics; selecting, running, and interpreting the results of diagnostics; selecting and interpreting the results of input test patterns; etc. The initial focus of our work has been to develop three elements that appear to be fundamental to all of these capabilities. We require (i) a language for describing structure, (ii) a language for describing function, and (iii) a set of principles for troubleshooting that uses the two descriptions to guide its investigation. This paper describes our progress to date on each of these elements.

----------
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research on electronic troubleshooting is provided in part by the Digital Equipment Corporation.
In discussing troubleshooting we show why the traditional approach to reasoning about digital electronics --- test generation --- solves a different problem, and we discuss a number of its practical shortcomings. We consider next the style of debugging known as violated expectations and demonstrate why it is a fundamental advance over traditional test generation. Further exploration of the violated expectation approach, however, demonstrates that it is incapable of dealing with commonly known classes of faults. We explain the shortcoming as arising from the use of a fault model that is both implicit and inseparable from the basic troubleshooting methodology. We argue for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.

Structure Description

By structure description we mean topology --- the connectivity of components. A number of structure description languages have been developed, but most, having originated in work on machine design, deal exclusively with functional components, rarely making any provision for describing physical organization.(1) In doing machine diagnosis, however, we are dealing with a collection of hardware whose functional and physical organizations are both important. The same gate may be both (i) functionally a part of a multiplexor, which is functionally a part of a datapath, etc., and (ii) physically a part of chip E67, which is physically part of board 5, etc. Both of these hierarchies are relevant at different times in the diagnosis and both are included in our language.

We use the functional hierarchy as the primary organizing principle because, as noted, our basic task involves reasoning from function to structure rather than the other way around.(2) The functional organization is also typically richer than the structural (more levels to the hierarchy, more terms in the vocabulary), and hence provides a useful organizing principle for the large number of individual physical components. Compare, for example, the functional organization of a board (e.g., a memory controller with cache, address translation hardware, etc.) with the physical organization (1 pc board, 137 chips).

The most basic level of our description vocabulary is built on three concepts: modules, ports, and terminals (Fig. 1). A module can be thought of as a standard black box. A module has at least two ports; ports are the place where information flows into or out of a module. Every port has at least two terminals, one terminal on the outside of the port and one or more inside. Terminals are primitive elements; they store logic levels representing the information flowing into or out of a device through their port, but are otherwise devoid of substructure.

Figure 1 - The basic terms used in structure description.

Two modules are attached to one another by superimposing their terminals. In Fig. 1, for example, wire A is a module that has been attached to input-1 of the adder module in this fashion.

1. This is curiously true even for languages billing themselves as computer hardware description languages. They rarely mention a piece of physical hardware.
2. We are typically confronted with a machine that misbehaves, not one that has visible structural damage.
The language is hierarchical in the usual sense; modules at any level may have substructure. In practice, our descriptions terminate at the gate level in the functional hierarchy and the chip level in the physical hierarchy, since for our purposes these are black boxes --- only their behavior (or misbehavior) matters.

Fig. 2 shows the next level of structure of the adder and illustrates why ports may have multiple terminals on their inside: ports provide the important function of shifting level of abstraction. It may be useful to think of the information flowing along wire A as an integer between 0 and 15, yet we need to be able to map those four bits into the four single-bit lines inside the adder. Ports are the place where such information is kept. They have machinery (described below) that allows them to map information arriving at their outer terminal onto their inner terminals. The default provided in the system accomplishes the simple map required in Fig. 2.

Figure 2 - Next level of structure of the adder.

Since our ultimate intent is to deal with hardware on the scale of a mainframe computer, we need terms in the vocabulary capable of describing levels of organization more substantial than the terms used at the circuit level. We can, for example, refer to horizontal, vertical, and bitslice organizations, describing a memory, for instance, as "two rows of five 1K rams". We use these specifications in two ways: as a description of the organization of the device and as a specification for the pattern of interconnections among the components. Our eventual aim is to provide an integrated set of descriptions that span the levels of hardware organization, ranging from the interconnection of individual modules, through higher levels of organization of modules, and eventually on up through the register transfer and PMS level [2]. Some of this requires inventing vocabulary like that above; in other places (e.g., PMS) we may be able to make use of existing terminology and concepts.

The structural description of a module is expressed as a set of commands for building the module. Hence the adder of Fig. 2 is described by indicating how to "build" it (Fig. 3). These commands are then executed by the system, causing it to build data structures that model all the components and connections shown. The resulting data structures are organized around the individual components. Executing the first expression of Fig. 3, for example, produces 4 data structures that model the individual slices of the adder.

   (definemodule adder
     (repeat 4 i
       (part slice-i adder-slice)
       (run-wire (input-1 adder) (input-1 slice-i))
       (run-wire (input-2 adder) (input-2 slice-i))
       (run-wire (output slice-i) (sum adder)))
     (repeat 3 i
       (run-wire (carry-out slice-i) (carry-in slice-[i+1]))))

Figure 3 - Parts are described by a pathname through the part hierarchy, e.g., (input-1 adder). (This description can be abbreviated as a bitslice organization, but is expanded here for illustration.)

This approach to structure description offers two interesting properties: (a) a natural merging of procedural and object-oriented descriptions, and (b) the use of analogic representations. To see the merging of descriptions, note that we have two different ways of thinking about structure. We describe a device by indicating how to build it (the procedural view), but then want to think about it as a collection of individual objects (the object-oriented view).
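As a rough illustration of how executing such build commands yields linked object structures, consider this sketch (a Python stand-in of our own devising; the actual implementation is built on DPL and Lisp):

   class Terminal:
       # Primitive element: stores a logic level and records which
       # terminals it has been superimposed on.
       def __init__(self):
           self.level = None
           self.joined = set()

   class Module:
       def __init__(self, name):
           self.name = name
           self.ports = {}                       # port name -> outer Terminal
       def port(self, name):
           return self.ports.setdefault(name, Terminal())

   def run_wire(t1, t2):
       # "Superimposing" two terminals links the data structures the same
       # way the physical terminals are connected: an analogic representation.
       t1.joined.add(t2)
       t2.joined.add(t1)

   # Building the four-slice adder of Fig. 3 procedurally; the very same
   # objects then answer object-oriented questions about connectivity.
   adder = Module("adder")
   slices = [Module(f"slice-{i}") for i in range(1, 5)]
   for s in slices:
       run_wire(adder.port("input-1"), s.port("input-1"))
       run_wire(adder.port("input-2"), s.port("input-2"))
       run_wire(s.port("output"), adder.port("sum"))
   for i in range(3):
       run_wire(slices[i].port("carry-out"), slices[i + 1].port("carry-in"))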
The first view is convenient for describing structure; the second makes it easy to answer questions about it -- questions like connectivity, location, etc., that are important in signal tracing and other troubleshooting techniques. The two descriptions are unified because the system simply "runs" the procedural description to produce the data structures modeling the device. This gives us the benefit of both approaches with no additional effort and no chance that the two will get out of sync.

The representation is analogic because the data structures that are built are isomorphic to the structure being described. "Superimposing" two terminals, for instance, is implemented as a merging of the structures representing the terminals. The resulting data structures are thus connected in the LISP sense in the same ways that the objects are connected in Fig. 2. The benefit here is primarily conceptual; it simply makes the resulting structures somewhat easier to understand.

Our description language has been built on a foundation provided by a subset of DPL [1]. While DPL as originally implemented was specific to VLSI design, it proved relatively easy to "peel off" the top level of language (which dealt with chip layout) and rebuild on that base the new layers of language described above.

Since pictures are a fast, easy, and natural way to describe structure, we have developed a simple circuit drawing system that permits interactive entry of pictures like those in Figs. 2 and 4. Circuits are entered with a combination of mouse movements and keystrokes; the resulting structures are then "parsed" into the language shown in Fig. 3.

Behavior Description

A variety of techniques have been explored in describing behavior, including simple rules for mapping inputs to outputs, petri nets, and unrestricted chunks of code. Simple rules are useful where device behavior is uncomplicated, petri nets are useful where the focus is on modeling parallel events, and unrestricted code is often the last resort when more structured forms of expression prove too limited or awkward. Various combinations of these three have also been explored.

Our initial implementation uses constraints [10] to represent behavior. Conceptually a constraint is simply a relationship. The behavior of the adder of Fig. 1, for example, can be expressed by saying that the logic levels of the terminals on ports input-1, input-2, and sum are related in the obvious fashion. This is an expression of a relationship, not a commitment to a particular computation --- the logic level at any one of the terminals can be computed given the other two. In practice, this is accomplished by defining a set of rules covering all the different computations (the three for the adder are shown below) and setting them up as demons that watch the appropriate terminals. A complete description of a module, then, is composed of its structural description as outlined earlier and a behavior description in the form of rules that interrelate the logic levels at its terminals.

   to get sum     from (input-1 input-2) do (+ input-1 input-2)
   to get input-1 from (sum input-2)     do (- sum input-2)
   to get input-2 from (sum input-1)     do (- sum input-1)

A set of rules like these is in keeping with the original conception of constraints, which emphasized the non-directional, relational character of the information. When we attempt to use it to model causality and function, however, we have to be careful.
This approach is well suited to modeling causality and behavior in the world of analog circuits, where devices are largely non-directional. But we can hardly say that the last two rules above are a good description of the behavior of an adder chip --- the device doesn't do subtraction; putting logic levels at its output and one input does not cause a logic level to appear on its other input. The last two rules really model the inferences we make about the device. Hence we find it useful to distinguish between rules representing flow of electricity (digital behavior, the first rule above) and rules representing flow of inference (conclusions we can make about the device, the next two rules). This not only keeps the representation "clean", but, as we will see, it provides part of the foundation for the troubleshooting mechanism.

A set of constraints is a relatively simple mechanism for specifying behavior, in that it offers no obvious support for expressing behavior that falls outside the "relation between terminals" view. The approach also has known limits. For example, constraints work well when dealing with simple quantities like numbers or logic levels, but run into difficulties if it becomes necessary to work with symbolic expressions.(3) The approach has, nevertheless, provided a good starting point for our work and offers two important advantages. First, the DPL and constraint machinery includes mechanisms for keeping track of dependency information --- an indication of how the system determined the value at a terminal --- expressed in terms of what rule computed the value and what other values the rule used in performing its computation. This is very useful in tracing backward to the source of the misbehavior. Second, the system provides machinery for detecting and unwinding contradictions. A contradiction arises if two rules try to set different values for the same terminal. As we illustrate below, the combination of dependency information and the detection of contradictions provides a useful starting place for troubleshooting.

3. What, for example, do we do if we know that the output of an or-gate is 1 but we don't know the value at either input? We can refrain from making any conclusion about the inputs, which makes the rules easy to write but misses some information. Or we can write a rule which expresses the value on one input in terms of the value on the other input. This captures the information but produces problems when trying to use the resulting symbolic expression elsewhere.

Our system design offers a number of features which, while not necessarily novel, do provide useful performance. For example, our approach offers a unity of device description and simulation, since the descriptions themselves are "runnable". That is, the behavior descriptions associated with a given module allow us to simulate the behavior of that module; the interconnection of modules specified in the structure description then causes results computed by one module to propagate to another. Thus we don't need a separate description or body of code as the basis for the simulation; we can simply "run" the description itself. This ensures that our description of a device and the machinery that simulates it can never disagree about what to do, as can be the case if the simulation is produced by a separately maintained body of code.

Our use of a hierarchic approach and the terminal, port, module vocabulary makes multi-level simulation very easy. In simulating any module we can either run the constraint associated with the terminals of that module (simulating the module in a single step), or "run the substructure" of that module, simulating the device according to its next level of structure. Since the abstraction-shifting behavior of ports is also implemented with the constraint mechanism, we have a convenient uniformity and economy of machinery: we can enable either the constraint that spans the entire module or the constraint that spans the port.
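A minimal sketch (ours; the real machinery is Lisp-based) of rules acting as demons that record dependencies and detect contradictions:

   class Contradiction(Exception):
       pass

   class Node:
       # A terminal's value cell, plus a record of how the value arose.
       def __init__(self, name):
           self.name, self.value, self.support = name, None, None
       def set(self, value, rule, antecedents):
           if self.value is not None and self.value != value:
               raise Contradiction((self.name, self.value, value))
           self.value = value
           self.support = (rule, antecedents)    # the dependency record

   def adder_demon(in1, in2, out):
       # Electrical rule: current flow computes the sum from the inputs.
       if in1.value is not None and in2.value is not None:
           out.set(in1.value + in2.value, "sum-rule", (in1, in2))
       # Inference rules: conclusions we may draw about an input, given the
       # output and the other input (these model reasoning, not current flow).
       elif out.value is not None and in2.value is not None:
           in1.set(out.value - in2.value, "input-1-rule", (out, in2))
       elif out.value is not None and in1.value is not None:
           in2.set(out.value - in1.value, "input-2-rule", (out, in1))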
Varying the level of simulation is useful for speed (no need to simulate verified substructure), and it provides as well a simple check on structure and behavior specification: we can compare the results generated by the module's behavior specification with those generated by the next lower level of simulation. Mismatches typically mean a mistake in structure specification at the lower level.

We believe it is important in this undertaking to include descriptions of both design and implementation, and to distinguish carefully between them. A wire, for example, is a device whose behavior is specified simply as the guarantee that a logic level imposed on one of its terminals will be propagated to the other terminal. Our structure description allows us to indicate the intended direction of information flow along a wire, but our simulation is not misled by this. This is, of course, important in troubleshooting, since some of the more difficult faults to locate are those that cause devices to behave not as we know they "should", but as they are in fact electrically capable of doing. Our representation machinery allows us to include both design specifications (the functional hierarchy) and implementation (the physical hierarchy) and keep them distinct.

Finally, the behavior description is also a convenient mechanism for fault insertion. A wire stuck at zero, for example, is modeled by giving the wire a behavior specification that maintains its terminals at logic level 0 despite any attempt to change them. Bridges, opens, etc., are similarly easily modeled.
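Fault insertion by behavior override might be sketched as follows (ours, continuing the Node sketch above):

   def stuck_at_zero(*terminals):
       # Replace a wire's behavior specification with one that clamps its
       # terminals to logic level 0 regardless of attempts to change them.
       def demon():
           for t in terminals:
               t.value = 0
               t.support = ("stuck-at-0", ())
       demon()           # establish the fault now ...
       return demon      # ... and keep the demon to re-assert it afterwards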
Troubleshooting

The traditional approach to troubleshooting digital circuitry (e.g., [3]) has, for our purposes, a number of significant drawbacks. Perhaps most important, it is a theory of test generation, not a theory of diagnosis. Given a specified fault, it is capable of determining a set of input values that will detect the fault (i.e., a set of values for which the output of the faulted circuit differs from the output of a good circuit). The theory tells us how to move from faults to sets of inputs; it provides little help in determining what fault to consider, or which component to suspect. These questions are a central issue in our work for several reasons. First, the level of complexity we want to deal with precludes the use of diagnosis trees, which can require exhaustive consideration of possible faults. Second, our basic task is repair, rather than initial testing. Hence the problem confronting us is "Given the following piece of misbehavior, determine the fault." We are not asking whether a machine is free of faults; we know that it fails and know how it fails. Given the complexity of the device, it is important to be able to use this information as a focus for further exploration.

A second drawback of the existing theory is its use of a set of explicitly enumerated faults. Since the theory is based on boolean logic, it is strongly oriented toward faults whose behavior can be modeled as some form of permanent binary value, typically the result of stuck-ats and opens. One consequence of this is the paucity of useful results concerning bridging faults.

A response to these problems has been the use of what we may call the "violated expectation" approach ([6], [4], [7]). The basic insight of the technique is the substitution of violated expectations for specific fault models. That is, instead of postulating a possible fault and exploring its consequences, the technique simply looks for mismatches between the values it expected from correct operation and those actually obtained. This allows detection of a wide range of faults because misbehavior is now simply defined as anything that isn't correct, rather than only those things produced by a stuck-at on a line.

This approach has a number of advantages. It is, first of all, fundamentally a diagnostic technique, since it allows systematic isolation of the possibly faulty devices, and does so without having to precompute fault dictionaries, diagnosis trees, or the like. Second, it appears to make it unnecessary to specify a set of expected faults (we comment further on this below). As a result, it can detect a much wider range of faults, including any systematic misbehavior exhibited by a single component. The approach also allows natural use of hierarchical descriptions, a marked advantage for dealing with complex structures.

This approach is a good starting point, but has a number of important limitations built into it. We work through a simple example to show the basic idea and use the same example to comment on its shortcomings. Consider the circuit in Fig. 4.(4) If we set the inputs as shown, the behavior descriptions will indicate that we should expect 12 at F. If, upon measuring, we find the value at F to be 10, we have a conflict between observed results and our model of correct behavior. We check the dependency record at F to find that the value expected there was determined using the behavior rule for the adder and the values emerging from the first and second multiplier. One of those three must be the source of the conflict, so we have three hypotheses: either the adder behavior rule is inappropriate (i.e., the first adder is broken), or one of the two inputs did not have the expected values (and the problem lies further back).

Figure 4 - Troubleshooting example using violated expectations. (Inputs A=3, B=2, C=2, D=3, E=3; the first multiplier computes x from A and C, the second computes y from B and D, the third computes z from C and E; the first adder sums x and y to give F, the second sums y and z to give G. Expected values in parentheses: x=y=z=6, F=12, G=12; measured value in brackets: F=10.)

4. As is common in the field, we make the usual assumptions that there is only a single source of error and that the error is not transient. Both of these are important in the reasoning that follows.

If the second input to adder-1 was good, then the first input must have been a 4 (reasoning from the result at F, valid behavior of the adder, and one of the inputs). But that conflicts with our expectation that it should be a 6. That expectation was based on the behavior rule for the multiplier and the expected values of its inputs. Since the inputs to the multiplier are primitive (supplied by the user), the only alternative along this line of reasoning is that the multiplier is broken. Hence hypothesis #2 is that adder-1 is good and multiplier-1 is faulty.

If the first input to adder-1 is good, then the second input must have been a 4 (suggesting that the second multiplier might be bad). But if that were a 4, then the expected value at G would be 10 (reasoning forward through the second adder). We can check this and discover in this case that the output at G is 12. Hence the value on the output of the second multiplier can't be 4;
it must be 6; hence the second multiplier can't be causing the current problem. So we are left with the hypotheses that the malfunction lies in either the first multiplier or the first adder. The diagnosis proceeds in this style, dropping down levels of structural detail as we begin to isolate the source of the error.

This approach is a useful beginning, but it has some clear shortcomings that result from hidden assumptions about faults. Consider the slightly revised example shown in Fig. 5. Reasoning just as before,(5) the fault at F leads us to suspect adder-1. But if adder-1 is faulty, then everything else is good. This implies a 6 on lines y and z, and (reasoning forward) a 12 at G. But G has been measured to be 6; hence adder-1 can't be responsible for the current set of symptoms. If adder-1 is good, then the fault at F might result from bad inputs (lines x and y). If the fault is on x, then y has a 6. But (reasoning forward) this means a 12 at G. Once again we encounter a contradiction and eliminate line x as a candidate. We turn to line y and postulate that it is 0. This is consistent with the faults at both F and G, and is in fact the only hypothesis we can generate.

Figure 5 - Troublesome troubleshooting example. (The same circuit and inputs as Fig. 4, but the measured values are now F=6 and G=6, against expectations of 12 at both.)

The key phrase here is "the only hypothesis we can generate". In fact, there is another quite reasonable hypothesis: the third multiplier might be bad.(6) But how could this produce errors at both F and G? The key lies in being wary of our models. The thought that digital devices have input and output ports is a convenient abstraction, not an electrical reality. If, as sometimes happens (due to a bent pin, bad socket, etc.), a chip fails to get power, its inputs are no longer guaranteed to act unidirectionally as inputs. If the third multiplier were a chip that failed to get power, it might not only send out a 0 along wire z, but it might also pull down wire C to 0. Hence the symptoms result from a single point of failure (multiplier-3), but the error propagates along an "input" line common to two devices.

The problem with the traditional violated expectation approach lies in its implicit acceptance of unidirectional ports and the reflection of that acceptance in the basic dependency-unwinding machinery. That machinery implicitly believes that inputs only get information from outputs --- when checking the inputs to multiplier-1, we said they were "primitive". We looked only at the input terminals A and C, never at the other end of the wire at multiplier-3.

5. The eager reader has no doubt already chosen a likely hypothesis. We go through the reasoning in any case, to show that the method outlined generates the same hypothesis and is in fact simply a more formal way of doing what we often do intuitively.
6. Or the first.

Bridges are a second common fault that illustrates an interesting shortcoming in the contradiction detection approach. The reasoning style used above can never hypothesize a bridging fault, again because of implicit assumptions about the model and their subtle reflection in the method. Bridges can be viewed as wires that don't show up in the design. But the traditional approach makes an implicit "closed world" assumption --- the structure description is assumed to be complete and anything not shown there "doesn't exist". Clearly this is not always true. Bridges are only one manifestation; wiring errors during assembly are another possibility.
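For concreteness, the dependency-unwinding style of candidate generation used in the Fig. 4 example can be sketched as follows (ours, continuing the Node sketch above). Note how the fault model is buried in the code: suspects can only come from recorded antecedents, so a failure that back-drives a shared "input" wire, as in Fig. 5, can never be proposed.

   def suspects(terminal):
       # Unwind the dependency record at a terminal whose measurement
       # conflicts with its expectation.  The rule that computed the
       # expectation is a candidate (e.g., "adder-1 is broken"), and so is
       # anything upstream; primitive, user-supplied values are trusted.
       rule, antecedents = terminal.support
       found = [rule]
       for t in antecedents:
           if t.support is not None:        # not primitive: look further back
               found.extend(suspects(t))
       # Corroborating measurements (like the reading at G) are then used
       # to rule candidates out by forward simulation.
       return found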
Let's review for a moment. One problem with the traditional test generation technology was its use of a very limited fault model. The contradiction detection approach improves on this substantially by defining a fault as anything that produces behavior different from that expected. This seems to be perfectly general but, as we illustrated, it is in fact limited in some important ways. So what do we do? If we toss out the assumption that input and output ports are unidirectional, we take care of that class of errors; the cost is generating more hypotheses. Perhaps we can deal with the increase. If we toss out the closed-world assumption and admit bridges, we're in big trouble. Even if we switch to our physical representation(7) to keep the hypotheses constrained to those that are physically plausible, the number is vast. If we toss out the assumption that the device was wired as the description indicates, we're in big trouble even if we invoke the single-point-of-failure constraint and assume only one such error. But some failures are due to multiple errors... and transients are an important class of errors... and...

Wait, down this road appears to lie madness, or at the very least, chaos. What can we do? We believe that the important thing to do is what human experts seem to do:

- Make all the simplifying assumptions we have to in order to keep the problem tractable.
- Be explicitly aware of what those assumptions are.
- Be aware of the effect the assumptions have on candidate generation and testing.
- Be able to discard each assumption in turn if it proves to be misleading.

The key, it seems, lies in determining what are the appropriate layers of assumptions for a given domain and in determining their effects on the diagnostic process. In our domain, for example, a sample list of the assumptions underlying correct function of a circuit might be:

- no wires are stuck
- no wires present other than those shown
- ports functioning in specified direction
- actual assembly matches design specifications
- original design is correct

Surrendering these one by one leads us to consider stuck-ats, then bridges, then power loss, etc. We have significant work yet to do in determining a more complete and correct list, and in determining the consequences of each assumption on the diagnostic process. But we feel this is a key to creating more interesting and powerful diagnostic reasoners.

7. Remember, we said it was important to have one.

References

[1] Batali, J. and Hartheimer, A. The Design Procedure Language Manual. MIT AI Memo 598, September 1980.

[2] Bell, G. and Newell, A. Computer Structures: Readings and Examples. McGraw-Hill, 1971.

[3] Breuer, M. and Friedman, A. Diagnosis and Reliable Design of Digital Systems. Computer Science Press, 1976.

[4] Brown, J. S., Burton, R. and deKleer, J. Pedagogical and knowledge engineering techniques in the SOPHIE systems. Xerox Report CIS-14, 1981.

[5] Davis, R. Expert systems: Where are we and where do we go from here? AI Magazine, Summer 1982.

[6] deKleer, J. Local Methods for Localizing Faults in Electronic Circuits. MIT AI Memo 394, 1976.

[7] Genesereth, M. The use of hierarchical models in the automated diagnosis of computer systems. Stanford HPP Memo 81-20, December 1981.

[8] Patil, R., Szolovits, P. and Schwartz, W. Causal understanding of patient illness in medical diagnosis. Proc. IJCAI-81, August 1981, pp. 893-899.

[9] Shortliffe, E. Computer-Based Medical Consultations: MYCIN. American Elsevier, 1976.
[10] Sussman, G. and Steele, G. Constraints --- a language for expressing almost-hierarchical descriptions. AI Journal, Vol. 14, August 1980, pp. 1-40.
EXPERT SYSTEMS: A USER'S PERSPECTIVE OF SOME CURRENT TOOLS

Susan P. Ennis
Amoco Production Company, Research Center
4502 East 41st Street, P.O. Box 591
Tulsa, OK 74102

ABSTRACT

This task is a good candidate for an expert system:

- Mineralogists handle the analysis well,
- Spectral problems have been solved with AI techniques [1],
- The system can be verified by running extensive tests on selected samples,
- The expert was available and willing to devote his time, and
- The correct analysis of rock samples is important.

Textbooks on x-ray diffraction present an algorithm for solving this problem [3]. This algorithm utilizes the characteristic spectrum for each mineral, which describes the location and relative amplitude of each peak. A published file of characteristic spectra data is readily available [8], as are programs that apply the algorithm. The peak locations can be readily reproduced in the laboratory, but the relative intensities vary considerably from the published norms as a function of the laboratory procedures. Control of these procedures for running several hundred rock samples per week is prohibitively expensive. Expert mineralogists understand the vagaries of the data and apply the algorithm with heuristic modifications gleaned from experience.

The basic expert process to be modeled is an iterative one: selecting the highest peak in the spectrum, matching that peak to a mineral (called the determination phase), and subtracting the contribution of that mineral from the spectrum (called the reduction phase). The locations of the major and minor peaks of the minerals of concern are often very close or even coincident and, in addition, minor peaks of several minerals may combine to form the current highest peak.
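A schematic sketch of that cycle (ours, not the production code; the matching heuristics are exactly what the mineralogist's rules refine, and the noise threshold is an assumption):

   NOISE_FLOOR = 0.05     # assumed threshold below which peaks are ignored

   def determine(loc, minerals):
       # Stand-in for the heuristic determination rules: here, the first
       # mineral whose strongest characteristic peak lies at this location.
       for name, peaks in minerals:
           if loc in peaks and peaks[loc] == max(peaks.values()):
               return name, peaks
       return None

   def analyze(spectrum, minerals):
       # spectrum: peak location -> intensity;
       # minerals: list of (name, {location: relative amplitude}) pairs.
       found = []
       while spectrum and max(spectrum.values()) > NOISE_FLOOR:
           loc = max(spectrum, key=spectrum.get)      # highest remaining peak
           match = determine(loc, minerals)           # determination phase
           if match is None:
               break                                  # unexplained peak
           name, peaks = match
           found.append(name)
           scale = spectrum[loc] / peaks[loc]         # fit mineral to the peak
           for p, amp in peaks.items():               # reduction phase
               if p in spectrum:
                   spectrum[p] = max(0.0, spectrum[p] - scale * amp)
       return found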
The tools were evaluated against several criteria:

- The knowledge statements, notably the rules, can be read and understood by non-computer-oriented domain experts,
- It is easy to modify and expand the knowledge base,
- The systems can explain their reasoning -- both during the running of the system (via a trace) and after the final conclusions have been made (explain), and
- Only knowledge needs to be entered into the system. If rules are necessary to control the questioning or rule execution, then these "control rules" can be separated from the "domain rules."

A major objective was to create a distinct sub-set of rules for the mineralogist to examine and expand. In particular, the mineralogist did not want to be concerned either with the control of the iterative process or with some of the data manipulation involved. The systems are also rated by the ease with which application-specific extensions can be made, since all of them are undergoing active development and it is unlikely that a single system will ever include all of the possible functions for each application.

Because of time limitations, knowledge-base construction ceased as soon as it was shown that the system could handle the problem and give correct results; the extensions to the various tools were not hardened for production use.

III THE TOOLS

A. EXPERT

For various reasons, primarily related to hardware availability, the problem was first approached using EXPERT [7]. This effort, lasting about one month, demonstrated several things. It showed that mineral determination rules could be written and it showed that the determination-reduction cycle must be run iteratively to prevent minor peaks of several minerals from combining to give false indications of another mineral. The attempt to use EXPERT was abandoned because EXPERT does not allow parameter values to be changed once they are set and each possible spectral peak had to be named in advance. It should be noted that no attempt was made to work with the developers to extend EXPERT to handle these problems.

B. LISP

At this point a system was written in LISP to perform the task. The system is domain-specific, with much of the control and very generalized knowledge embedded. However, the rules to perform the determination and reduction have an English-like syntax, the run can be traced, and the system can explain its reasoning to the user. The inference method and the control and generalized knowledge functions were finished in 1.5 months. A sufficiently 'user friendly' interface was completed in another two months and the entire system was given to the mineralogist for refinement of the rule base. The mineralogist was able to process data interactively, examine the run, change the rules using a full-screen text editor, and reprocess data using the revised rules. Two months after receiving the system, the mineralogist had made changes to the rules for most of the minerals known to the system and added rules for two additional minerals. Although he was computer-naive, he asked for help only once. He was very pleased with the capability the system gave him for changing the rules without help from a programmer.

C. UNITS

The next system tried was Units [4], a very sophisticated data base system with inheritance along a tree structure and the capability to store rules written in a procedural language. A system that could handle four minerals was constructed in one month. The rules were conveniently grouped for a mineralogist to examine -- both in interactive and hard-copy form. The control rules were well separated from the domain rules. The primitive nature of the rule language required many statements where a more complete language would have allowed the flow of the mineralogy rules to be improved. Units does provide a table data type that was used to store and manipulate the spectral data without the need to pre-define each possible peak. There is a dearth of generalized relations and functions. New data types can be defined by the Knowledge Engineer (KE) in a very structured manner. However, the necessary relations and functions for using these data types require that existing system code be modified. One of the existing functions was more than 600 lines in PrettyPrint format. Units has no explanation facilities and poor run-time tracing facilities. Limited procedural attachment is allowed, though the system documentation is misleading on how to activate it. Units is written in INTERLISP, which is upper/lower case sensitive. The KE interface does not always protect the KE from entering objects in the incorrect case. On the other hand, all of the messages generated for the user running the system are in upper case. It does offer a reasonable mechanism to separate parts of a problem, if the problem is separable.

D. EMYCIN

EMYCIN [5,6] could not handle the iteration required by the problem nor could it represent the spectral data without naming each possible peak.
The developers added a single function to handle the iteration problem and a new data type (array) with its concomitant functions to handle the spectral data. A mutually cooperative debugging effort allowed the construction of a system able to handle four minerals in three weeks. Several application-specific functions were added with ease. The control rules separate nicely from the domain rules, though there is not an easy way to get a separated listing of the rules. The trace facility was adequate. Because of the heavy dependence on the array data type and the rushed implementation of that type, the Question/Answer module (explanation facility) of EMYCIN was not used in debugging the system. The KE interface is extremely good. Although written in INTERLISP, the system handles the upper-lower case problem well. The EMYCIN system maintains a very large network of pointers. This ability of the system to cross-reference almost anything to anything else was invaluable for debugging the knowledge base.

E. OPS-5

The system was constructed under OPS-5 [2] in 1.5 weeks -- for only three minerals, but enough to show that it could work. Functions can be added in a straight-forward manner, although expansion of the documentation by a few examples would assist a new KE. Relations cannot be added and the rules can become rather complex. Cross-referencing of rules (productions in OPS terminology) and parameters is not supported. OPS offers splendid opportunities to the KE who wants to handle his own control, because the OPS control is extremely primitive. This leads to control and knowledge being interspersed in almost every production. The developers contend that the control is part of the knowledge and in a sense they are correct. However valid this assertion may be from an Artificial Intelligence viewpoint, it frustrated the attempt to separate control and domain knowledge. It should be rather simple to write a domain-specific interface that would give the appearance of a separation of domain and control rules.

IV CONCLUSION

The x-ray problem was implemented on the tools in the order presented. The generally decreasing time to implement the system on each new tool would probably have occurred independent of order. This study did show that a knowledge base can be re-structured to run with various tools in a short time period. The tools do not, in general, provide what industry has been led to expect. This is a shortcoming of the current implementations, rather than the basic technology. A large amount of work remains to be done before these research tools can be used effectively by industry. The benefits to those who persevere are potentially large. A recommended approach is to select an appropriate tool and generate a prototype system. With luck the tool will be capable of supporting the full system, and with cooperation from the developers most of the tools can be expanded to support the full system. At worst the KE will have generated the specifications for a domain-specific LISP-based system. Each and every industrial application -- if its conception problems are reported -- will further the development of tools suitable for industrial usage.

ACKNOWLEDGMENTS

To Amoco Production Company for providing the opportunity for me to do this study. To James C. Hoffman (Amoco) for developing the mineralogy rules. To Ellen Bailey and Jay Tompkins (Amoco) for assisting in the construction of the LISP system.
To those who helped with the various tools: S. M. Weiss (Rutgers) for EXPERT, Peter Friedland (Stanford and Teknowledge) for Units, Jim Bennett and Carli Scott (Stanford and Teknowledge) for EMYCIN, and C. L. Forgy (Carnegie-Mellon) for OPS-5. To Jim Bennett and Jay Tompkins for reviewing drafts.

REFERENCES

[1] Buchanan, B. G. and E. A. Feigenbaum, "DENDRAL and Meta-DENDRAL." Artificial Intelligence 11:1 (1978) 5-24.
[2] Forgy, C. L., "OPS-5 User's Manual." CMU-CS-81-135, Dept. of Computer Science, Carnegie-Mellon University, July, 1981.
[3] Klug, H. P. and L. E. Alexander, X-ray Diffraction Procedures. New York, John Wiley & Sons, Rev. Ed., 1974.
[4] Smith, R. G. and P. Friedland, "A User's Guide to the Unit System." Heuristic Programming Project Memo HPP-80-28, Stanford University, December, 1980.
[5] van Melle, W., A domain-independent system that aids in constructing knowledge-based consultation programs. PhD dissertation, Computer Science Dept., Stanford University, June, 1980.
[6] van Melle, W., A. C. Scott, J. S. Bennett, and M. A. Peairs, "The EMYCIN Manual." Heuristic Programming Project Report HPP-81-16, Stanford University, November, 1981.
[7] Weiss, S. M. and C. A. Kulikowski, "EXPERT: A System for Developing Consultation Models." In Proc. IJCAI-79, Tokyo, Japan, August, 1979, pp. 942-947.
[8] Joint Committee on Powder Diffraction Standards, Mineral Powder Diffraction File. International Center for Diffraction Data, Swarthmore, PA.
REVEREND BAYES ON INFERENCE ENGINES: A DISTRIBUTED HIERARCHICAL APPROACH(*)(**)

Judea Pearl
Cognitive Systems Laboratory
School of Engineering and Applied Science
University of California, Los Angeles 90024

ABSTRACT

This paper presents generalizations of Bayes' likelihood-ratio updating rule which facilitate an asynchronous propagation of the impacts of new beliefs and/or new evidence in hierarchically organized inference structures with multi-hypotheses variables. The computational scheme proposed specifies a set of belief parameters, communication messages and updating rules which guarantee that the diffusion of updated beliefs is accomplished in a single pass and complies with the tenets of Bayes calculus.

Introduction

This paper addresses the issue of efficiently propagating the impact of new evidence and beliefs through a complex network of hierarchically organized inference rules. Such networks find wide applications in expert systems [1], [2], [3], speech recognition [4], situation assessment [5], the modelling of reading comprehension [6] and judicial reasoning [7].

Many AI researchers have accepted the myth that a respectable computational model of inexact reasoning must distort, modify or ignore at least some principles of probability calculus. Consequently, most AI systems currently employ ad-hoc belief propagation rules which may hinder both the inferential power of these systems and their acceptance by their intended users. The primary purpose of this paper is to examine what computational procedures are dictated by traditional probabilistic doctrines and whether modern requirements of local asynchronous processing render these doctrines obsolete.

We shall assume that beliefs are expressed in probabilistic terms and that the propagation of beliefs is governed by the traditional Bayes transformations on the relation P(D|H), which stands for the judgmental probability of data D (e.g., a combination of symptoms) given the hypothesis H (e.g., the existence of a certain disease). The unique feature of hierarchical inference systems is that the relation P(D|H) is computable as a cascade of local, more elementary probability relations involving intervening variables. Intervening variables (e.g., organisms causing a disease) may or may not be directly observable. Their computational role, however, is to provide a conceptual summarization for loosely coupled subsets of observational data so that the computation of P(H|D) can be performed by local processes, each employing a relatively small number of data sources.

(*) The paper "An Essay Towards Solving a Problem in the Doctrine of Chances by the late Rev. Mr. Bayes", Phil. Trans. of Royal Soc., 1763, marks the beginning of the science of inductive reasoning.
(**) Supported in part by the National Science Foundation, Grant IST 80 19045.

The belief maintenance architecture proposed in this paper is based on a distributed asynchronous interaction between cooperating knowledge sources without central supervision, similar to that used in the HEARSAY system [4].
We assume that each variable (i.e., a set of hypotheses) is represented by a separate processor which both maintains the parameters of belief for the host variable and manages the communication links to and from the set of neighboring, logically related variables. The communication lines are assumed to be open at all times, i.e., each processor may at any time interrogate its message-board for revisions made by its neighbors, update its own belief parameters and post new messages on its neighbors' boards. In this fashion the impact of new evidence may propagate up and down the network until equilibrium is reached.

The asynchronous nature of this model requires a solution to an instability problem. If a stronger belief in a given hypothesis means a greater expectation for the occurrence of a certain supporting evidence and if, in turn, a greater certainty in the occurrence of that evidence adds further credence to the hypothesis, how can one avoid an infinite updating loop when the two processors begin to communicate with one another? Thus, a second objective of this paper is to present an appropriate set of belief parameters, communication messages and updating rules which guarantee that the diffusion of updated beliefs is accomplished in a single pass and complies with the tenets of Bayes calculus.

A third objective is to demonstrate that proper Bayes inference can be accomplished among multi-valued variables and that, contrary to the claims made by Pednault, Zucker and Muresan [8], this does not render conditional independence incompatible with the assumption of mutual exclusivity and exhaustivity.

Definitions and Nomenclature

A node in an inference net represents a variable name. Each variable represents a finite partition of the world given by the variable values or states. It may be a name for a collection of hypotheses (e.g., identity of organism: ORG1, ORG2, ...) or for a collection of possible observations (e.g., patient's temperature: high, medium, low). Let a variable be labeled by a capital letter, e.g., A, B, C, ..., and its various states subscripted, e.g., A_1, A_2, ... . An inference net is a directed acyclic graph where each branch A -> B represents a family of rules of the form: if A_i then B_j. The uncertainties in these rules are quantified by a conditional probability matrix M(B|A), with entries M(B|A)_{ij} = P(B_j | A_i). The presence of a branch between A and B signifies the existence of a direct communication line between the two variables. The directionality of the arrow designates A as the set of hypotheses and B as the set of indicators or manifestations for these hypotheses. We shall say that B is a son of A and confine our attention to trees, where every node has only one multi-hypotheses father and where the leaf nodes represent observable variables. In principle, the model can also be generalized to include some graphs (multiple parents), keeping in mind that the states of each variable in the tree may represent the power set of multi-parent groups in the corresponding graph.

Structural Assumptions

Consider a segment of the tree in which node B has father A and sons E and F. The likelihood of the various states of B would, in general, depend on the entire data observed so far, i.e., on data from the tree rooted at B and on data from the network above B.
However, the fact that B can communicate directly only with its father (A) and its sons (F and E) means that the influence of the entire network above B on B is completely summarized by the likelihood it induces on the states of A. More formally, let D^d(B) stand for the data obtained from the tree rooted at B, and D^u(B) for the data obtained from the network above B. The presence of only one link connecting D^u(B) and D^d(B) implies:

    P(B_j | A_i, D^u(B)) = P(B_j | A_i)    (1)

This structural assumption of local communication immediately dictates what is normally called "conditional independence"; if C and B are siblings and A is their parent, then

    P(B_j, C_k | A_i) = P(B_j | A_i) · P(C_k | A_i)    (2)

because the data C = C_k is part of D^u(B) and hence (1) implies P(B_j | C_k, A_i) = P(B_j | A_i), from which (2) follows. Note the difference between the weak form of conditional independence in (2) and the over-restrictive form adopted by Pednault et al. [8], who also asserted independence with respect to the complements ¬A_i.

Combining Top and Bottom Evidences

Our structural assumption (1) also dictates how evidences above and below some variable B should be combined. Assume we wish to find the likelihood of the states of B induced by some data D, part of which, D^u(B), comes from above B and part, D^d(B), from below. Bayes' theorem, together with (1), yields the product rule:

    P(B_i | D^u(B), D^d(B)) = α P(D^d(B) | B_i) · P(B_i | D^u(B)),    (3)

where α is a normalization constant. This is a generalization of the celebrated Bayes formula for binary variables:

    O(H | E) = λ(E) O(H)    (4)

where λ(E) = P(E | H) / P(E | ¬H) is known as the likelihood ratio, and O(H) = P(H) / P(¬H) as the prior odds [2].

Equation (3) generalizes (4) in two ways. First, it permits the treatment of non-binary variables, where the mental task of estimating P(E | ¬H) is often unnatural, and where conditional independence with respect to the negations of the hypotheses is normally violated (i.e., P(E_1, E_2 | ¬H) ≠ P(E_1 | ¬H) P(E_2 | ¬H)). Second, it identifies a surrogate to the prior probability term for any intermediate node in the tree, even after obtaining some evidential data. According to (3), the multiplicative role of the prior probability in Equation (4) is taken over by the conditional probability of a variable based only on the evidence gathered by the network above it, excluding the data collected from below. Thus, the product rule (3) can be applied to any node in the network, without requiring prior probability assessments.

The root is the only node which requires a prior probability estimation. Since it has no network above, D^u(B) should be interpreted as the available background knowledge which remains unexplicated by the network below. This interpretation renders P(B_i | D^u(B)) identical to the classical notion of subjective prior probability. The probabilities of all other nodes in the tree are uniquely determined by the arc-matrices, the data observed and the prior probability of the root.

Equation (3) suggests that the probability distribution of every variable in the network can be computed if the node corresponding to that variable contains the parameters

    λ(B_i) ≜ P(D^d(B) | B_i)    (5)

and

    q(B_i) ≜ P(B_i | D^u(B)).    (6)

q(B_i) represents the anticipatory support attributed to B_i by its ancestors and λ(B_i) represents the evidential support received by B_i from its diagnostic descendants. The total strength of belief in B_i would be given by the product

    P(B_i) = α λ(B_i) q(B_i).    (7)
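As a small numerical illustration of the product rule, the following Python fragment combines anticipatory support q and evidential support λ for a three-state variable, as in eqns. (3) and (7); the numbers are invented for the example.

    # Illustrative numbers only: combining top and bottom evidence.
    q   = [0.50, 0.30, 0.20]   # q(B_i) = P(B_i | D^u(B)), support from above
    lam = [0.10, 0.40, 0.40]   # lambda(B_i) = P(D^d(B) | B_i), from below

    unnormalized = [l * p for l, p in zip(lam, q)]
    alpha = 1.0 / sum(unnormalized)             # normalization constant
    belief = [alpha * u for u in unnormalized]  # P(B_i) of eqn (7)
    print(belief)                               # [0.2, 0.48, 0.32]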
Whereas only two parameters, λ(E) and O(H), were sufficient for binary variables, an n-state variable needs to be characterized by two n-tuples: λ = (λ(B_1), ..., λ(B_n)) and q = (q(B_1), ..., q(B_n)).

Propagation of Information Through the Network

Assuming that the vectors λ and q are stored with each node of the network, our task is now to prescribe how the influence of new information spreads through the network. Traditional probability theory, together with some efficiency considerations [9], dictates the following propagation scheme, which we first report without proofs.

1. Each processor computes two message vectors: P and r. P is sent to every son while r is delivered to the father. The message P is identical to the probability distribution of the sender and is computed from λ and q using Equation (7). r is computed from λ using the matrix multiplication:

    r = M λ    (8)

where M is the matrix quantifying the link to the father. Thus, the dimensionality of r is equal to the number of hypotheses managed by the father. Each component of r represents the diagnostic contribution of the data below the host processor to the belief in one of the father's hypotheses.

2. When processor B is called to update its parameters, it simultaneously inspects the P(A) message communicated by the father A and the messages r_1, r_2, ..., communicated by each of its sons, and acknowledges receiving the latter. Using these inputs, it then updates λ and q as follows:

3. Bottom-up propagation: λ is computed using a term-by-term multiplication of the vectors r_1, r_2, ... :

    λ(B_i) = (r_1)_i × (r_2)_i × ... = Π_k (r_k)_i    (9)

4. Top-down propagation: q is computed using:

    q(B_i) = β Σ_j P(B_i | A_j) P(A_j) / (r')_j    (10)

where β is a normalization constant and r' is the last message from B to A acknowledged by the father A. (The division by r' amounts to removing from P(A) the contribution due to D^d(B), as dictated by the definition of q in Equation (6).)

5. Using the updated values of λ and q, the messages P and r are then recomputed as in step 1 and are posted on the message-boards dedicated to the sons and the father, respectively. In this updating scheme, multiplications and divisions of any two vectors stand for term-by-term operations.

The terminal nodes in the tree require special boundary conditions. Here we have to distinguish between the two cases:

1. Anticipatory node: an observable variable whose state is still unknown. For such variables, therefore, we should set λ = (1, 1, ..., 1).

2. Data-node: an observable variable with a known state. Following Equation (5), if the jth state of B was observed to be true, set λ = (0, 0, ..., 0, 1, 0, ...) with 1 at the jth position.

Similarly, the boundary condition for the root node is obtained by substituting the prior probability instead of the message P(A) expected from the father.
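A compact executable rendition of steps 1-5 may help fix the bookkeeping. The Python sketch below is our illustration of the λ/q scheme (eqns. 7-10) for a tree with known link matrices, not the paper's implementation; the class interface is invented for the example.

    # Illustrative sketch of the lambda/q updating scheme (eqns 7-10); not
    # the paper's code. Each node stores lambda, q, and its link matrix M
    # to the father, with M[i][j] = P(B_j | A_i).
    class Node:
        def __init__(self, n_states, M=None, prior=None):
            self.lam = [1.0] * n_states      # boundary condition: anticipatory
            self.q = prior or [1.0 / n_states] * n_states
            self.M = M                       # None at the root
            self.r_last = None               # last r message sent to father

        def belief(self):                    # eqn (7): P(B_i) = a lam(B_i) q(B_i)
            b = [l * p for l, p in zip(self.lam, self.q)]
            s = sum(b)
            return [x / s for x in b]

        def r_message(self):                 # eqn (8): r = M lam  (to the father)
            self.r_last = [sum(self.M[i][j] * self.lam[j]
                               for j in range(len(self.lam)))
                           for i in range(len(self.M))]
            return self.r_last

        def update(self, P_father, r_sons):
            if r_sons:                       # eqn (9): term-by-term product
                self.lam = [1.0] * len(self.lam)
                for r in r_sons:
                    self.lam = [a * b for a, b in zip(self.lam, r)]
            if P_father is not None:         # eqn (10), dividing out the last
                raw = [sum(self.M[i][j] * P_father[i] / self.r_last[i]
                           for i in range(len(P_father)))   # acknowledged r'
                       for j in range(len(self.lam))]
                s = sum(raw)
                self.q = [x / s for x in raw]

A data node simply overwrites its λ vector with the observation vector of boundary condition 2, and the root substitutes its prior for the missing P(A), exactly as stated above.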
A Token Game Illustration

Figure 2 shows six successive stages of belief propagation through a simple binary tree, assuming that updating is activated by changes in the belief parameters of neighboring processors. Initially (Figure 2a), the tree is in equilibrium and all terminal nodes are anticipatory. As soon as two data nodes are activated (Figure 2b), white tokens are placed on their links, directed towards their fathers. In the next phase, the fathers, activated by these tokens, absorb the latter and manufacture the appropriate number of tokens for their neighbors (Figure 2c): white tokens for their fathers and black ones for the children (the links through which the absorbed tokens have entered do not receive new tokens, thus reflecting the division of P by r'). The root node now receives two white tokens, one from each of its descendants. That triggers the production of two black tokens for top-down delivery (Figure 2d). The process continues in this fashion until, after six cycles, all tokens are absorbed and the network reaches a new equilibrium.

Figure 2: Six successive stages of belief propagation through a simple binary tree.

Properties of the Updating Scheme

1. The local computations required by the proposed scheme are efficient in both storage and time. For an m-ary tree with n states per node, each processor should store n² + mn + 2n real numbers, and perform 2n² + mn + 2n multiplications per update. These expressions are on the order of the number of rules which each variable invokes.

2. The local computations are entirely independent of the control mechanism which activates the updating sequence. They can be activated by either data-driven or goal-driven (e.g., requests for evidence) control strategies, by a clock or at random.

3. New information diffuses through the network in a single pass. Infinite relaxations have been eliminated by maintaining a two-parameter system (λ and q) to decouple top and bottom evidences. The time required for completing the diffusion (in parallel) is equal to the diameter of the network.

A Summary of Proofs

From the fact that λ is only influenced by changes propagating from the bottom and q only by changes from the top, it is clear that the tree will reach equilibrium after a finite number of updating steps. It remains to show that, at equilibrium, the updated parameters P(V_i), in every node V, correspond to the correct probabilities P(V_i | D^u(V), D^d(V)) or (see Equation (3)) that the equilibrium values of λ(V_i) and q(V_i) actually equal the probabilities P(D^d(V) | V_i) and P(V_i | D^u(V)). This can be shown by induction, bottom-up for λ and then top-down for q.

Validity of λ: λ is certainly valid for leaf nodes, as was explained above in setting the boundary conditions. Assuming that the λ's are valid at all children of node B, the validity of λ(B) computed through steps (8) and (9) follows directly from the conditional independence of the data beneath B's children (Equation (2)).

Validity of q: if all the λ's are valid, then P is valid for the root node. Assuming now that P(A) is valid, let us examine the validity of q(B), where B is any child of A. By definition (Equation (6)), q(B) should satisfy:

    q(B_i) = P(B_i | D^u(B)) = Σ_j P(B_i | A_j) P(A_j | D^u(A), D^d(S))

where S denotes the set of B's siblings. The second factor in the summation differs from P(A_j) = P(A_j | D^u(A), D^d(A)) in that the latter has also incorporated B's message (r')_j in the formation of P(A_j). When we divide P(A_j) by (r')_j as in (10), the correct probability ensues.

Conclusions

The paper demonstrates that the centuries-old Bayes formula still retains its potency for serving as the basic belief-revising rule in large, multi-hypotheses inference systems. It is proposed, therefore, as a standard point of departure for more sophisticated models of belief maintenance and inexact reasoning.

REFERENCES

[1] Shortliffe, E. H., and Buchanan, B. G., "A Model of Inexact Reasoning in Medicine". Math. Biosci., 23 (1975), 351-379.
[2] Duda, R. O., Hart, P. E. and Nilsson, N. J., "Subjective Bayesian Methods for Rule-Based Inference Systems". Tech. Note 124, AI Center, SRI International, Menlo Park, CA; also Proc. 1976 NCC (AFIPS Press).
[3] Duda, R., Hart, P., Barrett, P., Gaschnig, J., Konolige, K., Reboh, R. and Slocum, J., "Development of the Prospector Consultation System for Mineral Exploration". AI Center, SRI International, Menlo Park, CA, Sept. 1976.
[4] Lesser, V. R. and Erman, L. D., "A Retrospective View of the HEARSAY-II Architecture". Proc. 5th Int. Joint Conf. AI, Cambridge, MA, 1977, 790-800.
[5] DDI, Handbook for Decision Analysis, Decision and Design Inc., McLean, VA, 1973.
[6] Rumelhart, D. E., "Toward an Interactive Model of Reading". Center for Human Info. Proc. CHIP-56, UC La Jolla, March 1976.
[7] Schum, D. and Martin, A., "Empirical Studies of Cascaded Inference in Jurisprudence: Methodological Considerations". Rice Univ., Psychology Research Report #80-01, May 1980.
[8] Pednault, E. P. D., Zucker, S. W. and Muresan, L. V., "On the Independence Assumption Underlying Subjective Bayesian Updating". Art. Intel., Vol. 16, No. 2, May 1981, 213-222.
[9] Pearl, J., "Belief Propagation in Hierarchical Inference Structures". UCLA-ENG-CSL-8211, UC Los Angeles, January 1982.
CONSISTENT-LABELING PROBLEMS AND THEIR ALGORITHMS

Bernard Nudel
Dept. Computer Science, Rutgers University, New Brunswick, New Jersey 08903.

ABSTRACT

Two new classes of theories have been developed giving the expected complexities of three Consistent-Labeling Problem (CLP), or Constraint-Satisfaction, algorithms: Backtracking, Forward Checking and Word-wise Forward Checking. Apart from giving the exact expected complexity for these algorithms for the underlying CLP distribution and domain, these theories provide useful approximations for the complexity of solving essentially any individual CLP. Given this, and the fact that the theories can reflect changes in complexity due to changes in the ordering of variables used in the search, these theories have the potential to afford significant savings for any individual CLP, by predicting, prior to search, good orderings for use in solving that CLP. We are concurrently developing improved CLP algorithms based on this and similar ordering effects.

I INTRODUCTION

The consistent-labeling problem (CLP) is an NP-complete problem (see [1]) of broad relevance in Artificial Intelligence and Operations Research. In [2] Haralick shows the breadth of this class of problems. (We also refer the reader there, and to [3], for references to the literature on CLP algorithms.) Binary CLPs, on which we concentrate, are seen to be a significant subclass, including graph and subgraph isomorphism detection, graph coloring, boolean satisfiability, and packing problems, as well as specialized cases of several other important problems. Analytically, binary CLPs provide a useful special (yet far from trivial) case from which to pursue results for general CLPs. Unless stated otherwise, the term CLP will refer to a binary CLP.

In [3] Haralick presents, for two different algorithms, an analysis of the expected complexity of obtaining all solutions of a CLP, for CLPs arising according to a certain one-parameter probability distribution. This distribution is used by Haralick in generating random CLPs for an empirical comparison of seven different algorithms. We call these level-0 theories. For the CLP distribution they assume, they provide exact expected values.¹ However, we have found that these theories can be interpreted as providing approximate, but quite accurate, estimates for certain individual CLPs. In fact, by generalizing his analyses to apply to a more richly parameterized distribution, giving level-1 theories, we can obtain good estimates for any individual CLP solved by the corresponding algorithm (e.g., fig. IV-1). A significant aspect of these generalized theories (as opposed to the level-0 theories) is that they are able to capture, for a given CLP, the effects on algorithmic complexity resulting from a change in instantiation-order or consistency-checking order used during solution (e.g., fig. IV-2). The theories can thus lead to significant savings by providing a basis for an intelligent choice of those orderings. We are in fact, complementary to our mathematical analyses, developing improved CLP algorithms capable of exploiting such order-dependent effects [4, 7].

Empirically, word-wise (or bit-parallel) Forward Checking (wFC) was found best amongst the seven algorithms Haralick tested, with Forward Checking (FC) second best. Haralick obtained level-0 theories for Backtracking (BT) and Forward Checking.

¹Apart from a subtle error that we have found, which is beside the point at this stage. See section IV.B below.
In [5] we obtain level-1 theories for BT and FC as well as for wFC (whose level-0 form does not appear in [3]). Level-2 theories, that improve on their level-1 counterparts, are obtained in [6] for FC and wFC; though superseded in accuracy, level-1 theories remain useful in being more manageable analytically for deriving subsequent results, in particular analytically justified ordering heuristics. As representative of our work on FC, wFC and BT, we present here analytic and empirical results for FC only. Theory-0, theory-1 and theory-2 will be used below to denote the corresponding level theories for FC. We present theory-1, indicating its similarity to theory-2, and present experiments showing the accuracy of both theories in estimating, for individual CLPs, the complexity of solution, as well as in estimating the optimal orderings to use. A fuller account of our work is to be published in [7].

II CONSISTENT-LABELING PROBLEMS

A general consistent-labeling problem is characterized by a finite list² of n variables; each variable, v_i, or simply i, having an associated finite domain, from which it can take any of m_i values or labels. Constraints exist on which values are mutually compatible for various subsets of the n variables. The goal is to find one or more sets of assignments of all n variables to values in their corresponding domains, such that for each assignment set all constraints are simultaneously satisfied. If the constraints exist between some pairs of variables, but not between any triples or larger subsets of the n variables, then we have a binary consistent-labeling problem. Binary CLPs can be represented by their relations matrix, a bit-matrix such that its (ik, jl) bit is 1 iff the k-th value for variable i is consistent with the l-th value for variable j, and 0 otherwise. Symmetries in the relations matrix allow us to restrict our attention to the N ≡ C(n,2) component relation³ matrices [R^{i'j'}] of the relations matrix, for which i' < j'. For later use, we define M_{ij} ≡ m_i m_j, and m to be the vector [m_1 m_2 ... m_n] of domain sizes.

We consider two CLPs to be equivalent iff their two relations matrices are identical. Then, in the class K(n m), of CLPs on n variables, the i-th having domain size m_i, there are Π_{i<j} 2^{M_{ij}} different CLPs. The number of ones in [R^{i'j'}] we call the compatibility-count, l_{i'j'}, for variables i' and j', and it equals the number of ways that these two variables can be compatibly assigned values in their respective domains. A CLP's matrix [l_{ij}] of compatibility-counts (with l_{kk} undefined) we call its cc-matrix; the matrix [r_{ij}] of compatibility-ratios or simply compatibilities, r_{ij} ≡ l_{ij} / M_{ij}, we call its cr-matrix. Note that r_{ij} ∈ { 0, 1/M_{ij}, 2/M_{ij}, ..., 1 }, in contrast to their counterparts, p_{ij}, to be defined later. CLPs in K(n m) having the same cc-matrix [l_{ij}] (and hence cr-matrix) are compatibility equivalent, and fall into the same compatibility equivalence class or c-class, denoted K(n m [l_{ij}]) or K([l_{ij}]) when n and m are understood. The importance of these c-classes is explained in section III.

²We consider both the variables and their domains to be ordered.
There are Π_{i<j} [M_{ij} + 1] different c-classes for a given n and m, and the number of different CLPs in such a class for a given cc-matrix [l_{ij}] is Π_{i<j} C(M_{ij}, l_{ij}).

Haralick [3] considers CLPs of K(n m), with m_i = m for all i, to arise such that the probability of compatibility of any two labels for any two variables is a fixed value, p. We make use of a natural generalization of this, where a separate probability, p_{ij}, is allowed for each of the N ways to pair variables. We also generalize to allow non-equal domain sizes m_i. The corresponding probability distribution for individual CLPs, of cc-matrix [l_{ij}], in K(n m) is then

    P(CLP | [p_{ij}]) = Π_{i<j} p_{ij}^{l_{ij}} (1 - p_{ij})^{M_{ij} - l_{ij}}

Expected values of CLP-dependent quantities over all of K(n m), when CLPs arise according to this CLP-distribution, we call distribution averages, as opposed to the c-class averages which we will be using them as estimates for. How and why are explained in the following section.

III OUR ESTIMATION STRATEGY

What we really want to achieve is

GOAL A: Obtain analytic expressions for f_k(CLP), k = 1, 2, 3, being the exact values respectively, for a given CLP, of its number of solutions, of nodes in its search tree and of consistency checks carried out in the search; the latter two are algorithm dependent, the last being a true indication of the algorithm complexity [3].

Exact values for any given CLP seem an unrealistic goal. However, we have found that the spread of each of these quantities over CLPs within a c-class is quite small; for k = 2 and 3, c-class standard deviations are about 5% to 10% of the class average values. We have thus aimed for the averages over a c-class, taking CLPs as equally likely within a class, as estimates for a given CLP of that class. If CLPs of a class are equally likely in practice, then this class average minimizes the sum, over CLPs of the class, of square errors from the actual values. In this sense it is the best estimate for a given CLP as a function of its [l_{ij}] matrix only. In any case, because of the homogeneity amongst CLPs in a class, any distribution used would lead to essentially equally good estimates for individual CLPs in the class - the expected value of, say, 5.1, 5.0 and 4.9 is near to 5.0, and hence to any of the three numbers, independent of the weights we assign to these numbers. We have then

GOAL B: In order to approximate f_k(CLP) of Goal A, obtain analytic expressions for that CLP's c-class averages

    F_k([l_{ij}]) = Σ_{CLP ∈ K([l_{ij}])} f_k(CLP) / |K([l_{ij}])|

In [6] and [7] (for FC and wFC) we present exact expressions, constituting our level-2 theories, for these class averages. However, our chronologically earlier level-1 theories (for FC, wFC and BT) provide quite accurate approximations for them. It is in obtaining these level-1 theories that the generalized CLP generation model of the previous section provides a tool.⁴ Replacing Haralick's single parameter, p, by a matrix of parameters, [p_{ij}], provides sufficient structure so that the distribution average

    E(f_k(CLP) | [p_{ij}]) ≡ Σ_{CLP ∈ K(n m)} f_k(CLP) P(CLP | [p_{ij}])

of a CLP-dependent quantity, f_k(CLP), can provide a good estimate of that quantity's c-class averaged value, F_k([l_{ij}]), for any given c-class, providing we know how to choose the distribution parameters, [p_{ij}], appropriately for that class. In [5] we present several reasons why a good parameter-matrix choice for approximating a class average F_k([l_{ij}]) is [p_{ij}] = [r_{ij}] = [l_{ij} / M_{ij}]. Given this, we then have our final

GOAL C: Obtain analytic expressions for the distribution averages E(f_k(CLP) | [p_{ij}]). Then as approximations for the c-class averages F_k([l_{ij}]) of Goal B, use E(f_k(CLP) | [r_{ij}]).

Haralick's level-0 results provide these distribution averages for the [p_{ij}] = [p] case.⁵ We have found them good estimates for averages over c-classes having uniform [r_{ij}] = [r], when p = r is used in his level-0 theories. We generalized to include non-uniform [p_{ij}] so as to obtain level-1 theories that are equally accurate for classes with a non-uniform cr-matrix [r_{ij}].

³Primed indices denote fixed though unspecified values.
⁴Here, unlike in [3] where it models an actual experiment, it is a technical device only, for giving distribution averages that approximate c-class averages. Its use for this purpose implies no assumptions about the CLP distribution occurring in practice.
⁵We use [x] to indicate a matrix all of whose (defined) elements are equal to x.
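To make the notation concrete, here is a short Python sketch (ours, not the paper's) that computes the cc-matrix, the cr-matrix and the distribution probability P(CLP | [p_ij]) for a small binary CLP, given invented component relation matrices.

    from itertools import combinations
    from math import prod

    # Illustrative sketch of the cc/cr notation (not from the paper).
    # R[(i, j)] is the component relation matrix for the pair i < j.
    m = [2, 2, 2]
    R = {(0, 1): [[1, 0], [1, 1]],
         (0, 2): [[1, 1], [1, 1]],
         (1, 2): [[0, 1], [1, 0]]}

    l = {ij: sum(map(sum, Rij)) for ij, Rij in R.items()}       # cc-matrix
    M = {(i, j): m[i] * m[j] for i, j in combinations(range(3), 2)}
    r = {ij: l[ij] / M[ij] for ij in R}                          # cr-matrix

    def prob_of_clp(p):
        # P(CLP | [p_ij]) = prod p_ij^l_ij (1 - p_ij)^(M_ij - l_ij)
        return prod(p[ij] ** l[ij] * (1 - p[ij]) ** (M[ij] - l[ij])
                    for ij in R)

    print(l, r, prob_of_clp(r))   # choosing [p_ij] = [r_ij], as in Goal C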
Given this, we then have our final GOAL C: Obtain analytic expressions for the distribution averages E( f,(CLP) 1 CPijI 1. Then as approximations for the c-class averages F,( [I,,] ), of Goal B, use E( f,(CLP) I [r,,] ). Haralick’s level-0 results provide these distribution averages for the [p,j] = [p] case.5 We have found them good estimates for averages over c- classes having uniform [r,,] = [r], when p = r is used in his level-0 theories. We generalized to include non-uniform [p,,] so as to obtain level- 1 theories that are equally accurate for classes with a non-uniform cr-matrix [r,,]. 4Here, unlike in [3] where it models an actual experiment, it is a technical device only, for giving distribution averages that approximate c-class averages. Its use for this purpose implies no assumptions about the CLP distribution occuring in practice. 5We use [x] to indicate elements are equal to x. a matrix all of whose (defined) 129 IV FORWARD CHECKING - EXPECTED COMPLEXITY A. Nomenclature The instantiation order, F, 5 ( i, i2 . . i, 1, denotes the g/obally fixed6 order in which the n problem variables are assigned a value during the search. All nodes at level k of the tree have the same assigned variables A, = ( i , i, . . . i,_, i, 1. Variables that remain to be instantiated after level k, make up the list of future variables for that level F, = ( i,, , ik+2 . . . i, I. For added generality we allow an independently specifiable consistency-checking order GG E ( i’, ii . . . i; 1, according to which, future variables f E F, are to be selected for forward checking at any node of the search tree; at level k they are chosen in the order of the list G, , defined as F, reordered according to G,. The j-th element of the list G, is denoted g,k . In the theory below, products and sums over zero number of terms are to be taken as resulting in 1 and 0 respectively. B. Analytic Results With the above definitions, we now present our level- 1 theory for FC (theory- 11, giving the distribution averages of f,(CLPI k = 1, 2, 3. (See [S] for the derivation by a direct generalization of that in 131. In [6] and [7] an alternative recursive derivation is given that has the advantage of unifying both level- 1 and level-2 theories.) The expected number of solutions is S of eqn. ! 1). This is not a function of the algorithm used but other results here are specific to the FC algorithm. The expected number of nodes that appear at level k of the FC algorithm’s search tree is nk of eqn. (2). SC:) = 1 - Wtk) is the Survival probability of variable f when the first kf variables of F, have been instantiated, expressed in terms of the f variable’s Wipe-out probability: Wik) = ( 1 - m, Pf, Irnf . s = ! I/ mi 1 ( II p,, 1 ’ k 11) I<J nk = s:“- ‘) 1 f mk = (2) (41 (5) The expected number of labels for future variable f, before forward checking at level k is ml of eqn. (31, in terms of which the expected number of consistency- checks per node, during forward checking at the k-th level, is ck of eqn. (4). The expected (total) number of consistency checks during forward checking at the k-th level is then given by: Ck = “k ck. The Sum, c, Of Ck Over 6The algorithms we are developing have orderings that are dynamically decidable at each node. However a globally fixed order 1 acts as default in case of decision ties. It is a good choice for this default that we hope to arrive at through our analyses. the n levels of the tree is an appropriate unit for the expected complexity of FC [3]. 
The relation between Haralick's level-0 theory for FC and our level-1 theory above requires that our results reduce to his when all p_{ij} = p and all m_i = m. This is in fact the case, with the exception of expression (4) for c_k. This difference is due to a subtle error we have found in [3], so that Haralick's c_k is the specialized form of eqn. (4) with the denominator term, S^{(k-1)}, removed (i.e., replaced by 1). The theory-0 used below is the corrected version, using the specialized form of eqn. (4) as given here.

Though our level-2 theory for FC [6, 7] requires a more complicated expression for the wipe-out probability W_f^{(k)}, we note here that apart from this, the more accurate level-2 theory is exactly as for the level-1 theory above (but with p_{ij} already replaced throughout by r_{ij}, as we are required to do in fact, when applying theory-1 for a class with cr-matrix [r_{ij}]).

C. Examples and Discussion

Remember that the above theory-1 gives distribution-average values for CLPs arising according to the generalized model of section II (and as such they are exact), and these are to be used as estimates for c-class averages by setting [p_{ij}] equal to the class cr-matrix [r_{ij}]. Theory-2 gives these c-class average values exactly. A c-class average in turn is intended to be a useful estimate for any individual CLP of that class. The following experiments show the usefulness of both our theory-1 and theory-2 for these ends.

Figure IV-1 compares theory-1 and theory-2 with experiments over all 125 c-classes for the 4096 CLPs with n = 3 and m_i = 2 for all i, using orderings F_0 = G_0 = (1 2 3). For each c-class, all CLPs were solved by FC. The experimental c-class average of the number of nodes/CLP (#nodes) and consistency-checks/CLP (#checks) in a problem's search tree is shown. For the latter (only) we also indicate the corresponding experimental standard deviations for the c-classes (note that they are quite small and many are even zero!). As required, we use in theory-1 [p_{ij}] = [l_{ij} / M_{ij}] for each of the 125 classes arising from l_{12}, l_{13}, l_{23} ∈ { 0 1 2 3 4 }.

Figure IV-2 shows, using the 5-Queens problem and another CLP randomly selected from the same c-class, the ability of theory-1 and theory-2 to reflect complexity changes resulting from changes in instantiation and consistency-check order (here F_0 = G_0) used in the algorithm. The permutation# indicates the permutation of (1 2 3 4 5) used for these orders, where permutations are arranged in lexicographic (increasing numerical) order; the 60-th being (3 2 5 4 1).⁷ As required, we set [p_{ij}] to the problems' (common) [r_{ij}] matrix in applying theory-1. It is possible to show generally that for n-Queens problems, [r_{ij}] = [ (n² - 3n + 2|i-j|) / n² ].

⁷Results for theory-0, theory-1, theory-2 and the 5-queens CLP are symmetric, and for CLP_2 nearly symmetric, about "permutation# 60.5". We therefore display, and discuss, results only up to permutation# 60.

Figure IV-1: Experimental and theoretical c-class averages for CLPs of K(3 [2 2 2]), using FC. Plotted: #nodes and #checks per CLP (experiment and theory-2, with theory-1 shown separately), with standard-deviation bars for #checks, per c-class.

Figure IV-2: Variation of the #checks in the FC search tree with the variable-ordering used, for CLPs in the 5-queens c-class. Curves shown: experiments for CLP_1 (5-Queens) and CLP_2 (random), and C as predicted by theory-0, theory-1 ([p_{ij}] = [r_{ij}]) and theory-2, against permutation#.
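The n-Queens compatibility formula just stated can be checked directly by counting. The following Python fragment (ours, for illustration) does so by brute force: columns i and j conflict exactly when their queens share a row or a diagonal.

    # Brute-force check of r_ij = (n^2 - 3n + 2|i-j|) / n^2 for n-Queens.
    def r(n, i, j):
        ok = sum(1 for a in range(n) for b in range(n)
                 if a != b and abs(a - b) != abs(i - j))
        return ok / float(n * n)

    n = 5
    for i in range(n):
        for j in range(i + 1, n):
            assert r(n, i, j) == (n * n - 3 * n + 2 * abs(i - j)) / float(n * n)

Of the n² value pairs, n share a row and 2(n - |i-j|) lie on a conflicting diagonal, leaving n² - 3n + 2|i-j| compatible pairs, which is the numerator of the stated ratio.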
The figure shows empirical results for two individual CLPs representative of their c-class, in comparison with the theoretical results, which give class averages. Agreement between theories 1 and 2 and the two individual CLPs is seen to be good, especially in regard to ranking of orders. For example, the permutation that is optimal for reducing average complexity over the c-class is given by the exact theory-2 to be permutation# 57 or (3 2 4 1 5). The approximate theory-1 is seen to agree. Moreover, for the individual CLPs this is in fact the optimum order for CLP_2, and the 5-th best for use with CLP_1 (5-queens).

Regarding predicting class averages: For the class average number of solutions, theory-1 gives the same, exact result as theory-2. For #nodes and #checks in the search tree, the theory-1 approximation seems to consistently underestimate slightly the exact theory-2 c-class averages. This underestimation varies with c-class and order(s) used, but seems to generally be small: about 5% to 10% (less than 5% everywhere in figs. IV-1 and IV-2). Furthermore, this underestimation needn't impair theory-1's ability to predict optimal (w.r.t. class average) orderings, as long as theory-1 "shadows" well the variation of the exact theory-2, so that the peaks and troughs of the two occur essentially at the same orderings; Figure IV-2 and similar plots certainly show this to be the case.

Regarding predicting values for individual CLPs using the class average: In preliminary experiments we have found that for the #nodes and #checks in a tree, the ratio of standard deviation over a c-class to the class average value is also around 5% to 10%. This is reflected in the standard deviations in fig. IV-1 and by the closeness of the two CLP curves to the exact theory-2 curve in fig. IV-2. Our partitioning scheme is thus justified as being appropriate for our aim of obtaining quite homogeneous classes, so that a c-class average, in general, gives a good estimate for individual CLPs of the class.

V IMPROVED CLP ALGORITHMS

The ability, shown for FC in fig. IV-2, of our level-1 and level-2 theories to capture the order-dependence of complexity for the three algorithms analysed suggests using the theories in determining (quasi-) optimal global orderings before search begins. This requires solving, or inverting, the theories to express the complexity-minimizing orders as a function of the cc-matrix, rather than giving complexity as a function of orders and cc-matrix, as at present. We have not yet achieved such an inversion. However, there are several promising paths to at least an approximation method for such optimum global orderings.

Until recently, we have concentrated on a simpler goal of using the theories and heuristic reasoning to suggest locally good orderings. These are built up of a sequence of choices that are individually good without ensuring the goodness of the overall orders. The theory can be used for this because each node of a tree corresponds to a sub-CLP of the original CLP; a k-th level node corresponds to a sub-CLP having n-k variables, those in F_k at the node, with respective domains being the values that have survived from the original domains for those variables. As a result, any local heuristics suggested by the theory for use at the root can also be applied at any node throughout the tree, using the new parameters for that node. We have obtained and tested several such heuristics for instantiation ordering and consistency-check ordering. Results are presented in [4, 7].
Such dynamically-determined orderings have resulted in significant improvements. However, further efficiencies exist to be had using a combined local/global (dynamic/static) approach, where global orderings provide good defaults when local heuristics lead to ties. Theoretical determination of good global-ordering heuristics is our next goal.

ACKNOWLEDGMENTS: This work has benefited greatly from the rigorous standards of my supervisor, William Steiger, and from the encouragement of Saul Amarel (in whose course this work was first started) and Marvin Paull.

REFERENCES

1. Haralick, R. M., Davis, L. S. and Rosenfeld, A. "Reduction operations for constraint satisfaction." Information Sciences 14 (1978), 199-219.
2. Haralick, R. M. and Shapiro, L. G. "The consistent labeling problem: Part I." IEEE Trans. Pattern Analysis and Machine Intelligence PAMI-1, 2 (1979), 173-184.
3. Haralick, R. M. and Elliot, G. L. "Increasing tree search efficiency for constraint satisfaction problems." Artificial Intelligence 14 (1980), 263-313.
5. Nudel, B. Consistent-labeling problems and their algorithms - Part I. Tech. Rept. DCS-TR-112, Computer Science Dept., Rutgers University, 1982. To appear.
6. Nudel, B. Consistent-labeling problems and their algorithms - Part II. Tech. Rept. DCS-TR-113, Computer Science Dept., Rutgers University, 1982. To appear.
7. Nudel, B. "Consistent-labeling problems and their algorithms." Artificial Intelligence 20 (1983). Special issue on Search and Heuristics, in memory of John Gaschnig; to appear.
EVALUATING SEARCH METHODS ANALYTICALLY*

Paul W. Purdom, Jr. and Cynthia A. Brown
Computer Science Department, Indiana University, Bloomington, Indiana 47405

ABSTRACT

A unified approach to analyzing search algorithms is presented. Each algorithm is characterized by the types of random problems that it can solve rapidly. The results are displayed in a way that clearly indicates the strengths and weaknesses of each algorithm.

Many interesting problems can, at present, best be solved by search methods [7]. In the worst case, searching requires exponential time. Several clever techniques have been developed to speed up searching (see, for example, [1, 3, 6, 8, 12, 13, 14, 15, 20, 21, 22]). While each of these techniques is clearly helpful for some class of problems, it is difficult to evaluate the importance of each method (and of each combination of methods). Analytical studies have been done on several search methods [2, 4, 5, 9, 10, 11, 17]. Each analysis was parameterized in a way that emphasized search problems for which the method under consideration had an interesting behavior. Thus, Goldberg [9, 10] studied the pure literal rule using conjunctive normal form (CNF) predicates with long clauses, while Brown and Purdom [2, 17] studied backtracking using CNF predicates with short clauses. Since each algorithm has its own region of interesting behavior, the results of the analyses are difficult to compare with each other.

In this paper we describe a unified approach to analyzing search algorithms, one that indicates the strengths and weaknesses of each algorithm in a way that makes comparisons straightforward. We first analyze the average time behavior of each algorithm on random CNF problems characterized by v - the number of variables, t - the number of clauses, and p - the probability a given literal is in a clause (so that the average length of a clause is 2pv). This step is similar to the initial step of previous approaches, which continued by choosing particular functions p(v) and t(v) and studying the resulting asymptotic behavior. We continue by letting p(v) and t(v) be arbitrary functions of v, and finding the asymptotic behavior of the algorithm as v approaches infinity. Finally, we find the relation between p(v) and t(v) that characterizes the boundary between exponential and polynomial average time for the algorithm.

The results can be displayed with a diagram of p(v), t(v) space which shows the exponential vs. polynomial contour for each algorithm. Fig. 1 shows the results for several basic algorithms. Contours are drawn for ordinary backtracking where all solutions are found, using fixed search order (the results are derived in [18]); searching using the pure literal rule and a fixed search order [10, 18]; pattern of occurrence (an algorithm developed for use with problems that have only a few clauses) [18]; and elimination of unused variables followed by exhaustive search (this paper). Fig. 2 shows the results of using the same approach to obtain a more detailed comparison between ordinary backtracking and simple search rearrangement [16].

*Research reported herein was supported in part by the National Science Foundation under grant number MCS 7008110.
From these figures it is clear that the pure literal rule is useful for problems with large clauses (constraints that are easy to satisfy), that backtracking is useful for problems with many short clauses (large numbers of constraints that are not easy to satisfy), and that specialized methods are useful for problems with a small number of clauses (constraints). As more sophisticated algorithms are analyzed, the same approach can be used to identify the types of problems for which they are most suitable. The approach is even more useful for identifying which elementary techniques should be combined to produce powerful general purpose algorithms.

2. The Model

We compute the average running time of each algorithm using random CNF predicates. The predicates are formed using v variables, and thus 2v literals. A random clause is formed by independently selecting each literal with probability p. For each variable the probability of selecting both the positive and negative literal is p², so tautological clauses are common unless p is small (smaller than v^{-1/2}). A random predicate is formed by independently selecting t clauses. Notice that for any 0 < p < 1 all predicates with t clauses are included in the set of model problems. If p is small then predicates with short clauses have a higher probability, while if p is large then predicates with long clauses have a higher probability.

3. Results

The derivations for the results below are contained in the cited papers. We use the conventions: a = vp(v); α = (n ln v)/v for some large constant n; and ε is a small positive constant. We say f(v) ≲ g(v) when lim_{v→∞} f(v)/g(v) ≤ 1. Polynomial average time is used when:

1. Ordinary backtracking (fixed search order) [18]:
   a) a ≤ ln 2, t ≳ (ln 2 - a)v / (-ln(1 - e^{-a})), or
   b) a ≥ ln 2, t ≳ (ln 2 - a)v d, where d is the largest root of ln(1 + d) + d ln(1 + 1/d) = 2a.

2. Simple search rearrangement backtracking [16]:
   a) same as 1, or
   b) a ≳ 3.5, t ≳ (v / (2ea²)) exp(2a - α)(1 + o(1)). (See [16] for more details.)

3. Simple pure literal rule [10, 18]: p ≥ ε.

4. Pattern of occurrence [18]: t ≤ (ln ln v)/(ln 3).

5. Elimination of unused variables followed by exhaustive search [this paper]: t ≲ α/(2p).

The average number of solutions per problem is polynomial when t ≳ (ln 2 - a)v / (-ln(1 - e^{-a})).

4. Sample Analysis

To illustrate our method we give a brief analysis of Algorithm 5. The probability that neither literal for a variable occurs in a given predicate is (1-p)^{2t}. If i variables occur in a predicate, the time for exhaustive search is 2^i. The probability that i particular variables occur, and that the remaining v-i do not, is [1-(1-p)^{2t}]^i [(1-p)^{2t}]^{v-i}. The number of ways to choose i variables out of v is C(v,i). Therefore the average running time for Algorithm 5 is

    Σ_i 2^i C(v,i) [1-(1-p)^{2t}]^i [(1-p)^{2t}]^{v-i} = [2-(1-p)^{2t}]^v.

To obtain polynomial time, this average must be no more than v^n for some n. In other words [2-(1-p)^{2t}]^v ≤ v^n, or ln[2-(1-p)^{2t}] ≤ (n/v) ln v. Since ln[2-(1-p)^{2t}] must be small we can use ln[2-(1-p)^{2t}] ≈ 1-(1-p)^{2t}, which gives (1-p)^{2t} ≥ 1 - (n/v) ln v, or (using ln(1-x) ≈ -x for small x)

    t ≲ (n ln v)/(2vp) = α/(2p).
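The closed form and the threshold are easy to evaluate numerically. The following Python fragment (ours, for illustration; the parameter values are invented) computes the average running time [2-(1-p)^{2t}]^v and compares it with v^n near and well above the t ≈ α/(2p) boundary.

    from math import log

    # Numerical illustration of the Algorithm 5 analysis (not from the paper).
    def avg_time(v, t, p):
        return (2.0 - (1.0 - p) ** (2 * t)) ** v   # = [2 - (1-p)^{2t}]^v

    v, n = 1000, 4
    p = 0.001                                      # avg clause length 2pv = 2
    alpha = n * log(v) / v                         # alpha = (n ln v)/v
    t_star = alpha / (2 * p)                       # boundary t ~ alpha/(2p)

    print(t_star)                                  # about 13.8 clauses
    print(avg_time(v, int(t_star), p), v ** n)     # ~1e11, below v^n = 1e12
    print(avg_time(v, int(8 * t_star), p))         # well above: ~1e78, blows up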
The time for these eztreme cases is also poly- nomial in the number of variables except when p(u) is large and t(u) is exponential or larger. Each of the algorithms 2-5 has a region of p( u),t( u) space where it is much better than any of the other algo- rithms. Algorithm 1 is as good as Algorithm 2 for some regions. A diagram such as Fig. 1 gives a useful display of the strengths and weaknesses of each algorithm. Addendum: Recently we did a more careful analysis of the pure literal rule and showed that it leads t;,~$~somial average time when t 5 nlnu and when pt 5 (- ) [19]. This is u a major improvement over the performances shown in Fig. 1. REFERENCES (l] James R. Bitner and Edward M. Reingold, “Backtrack Programming Techniques”, Comm. ACM, v. 18 (1975) pp. 651-055. [2] Cynthia A. Brown and Paul Walton Purdom Jr., “An Average Time Analysis of Backtracking”, SIAM J. Comput. 10 (1981) pp. 583-593. [3] Martin Davis and Hilary Putnam, “A Computing Pro- cedure for Quantification Theory”, JAChf, v. 7 (1900) pp. 201-21 s. II John France, “Average Analysis of the Pure Literal Heuristic”, Case Institute of Technology Report No. CES-81-4 (1981). [5] John Franc0 and Marvin Paull, “Probabilistic Analysis of the Davis-Putnam Procedure for Solving the Satishability Problem”, Case Institute of Technology Report No. CES-81-3 (June 1981). [6] Eugene C. Freuder, “A Sufficient Condition for Backtrack-Free Search”, JACM, v. 29, No. 1 (January, 1982) pp. 24-32. [7] Michael R. Garey and David S. Johnson, Computera and Intractability, W.H. Freeman and Co., San Francisco (1979). [8] John Gaschnig, “Performance Measurement and Analysis of Certain Search Algorithms”, Thesis, Carnegie-Mellon (1979). [9] Allen Goldberg, “Average Case Complexity of the Satisfiability Problem”, Proccedinge of the Fourth Workshop on Automated Deduction (1979) pp. l-6. [lo] Allen Goldberg, Paul Walton Purdom, Jr. and Cynthia A. Brown, “Average Time Analysis of Simplified Putnam- David Procedures”, Info. Proc. Letters (to appear). [ll] Robert M. Haralick and Gordon L. Elliot, “Increasing Tree Search Efficiency for Constraint Satisfaction Problems”, Report from Virginia Polytechnic Institute, 1979. [la] Robert M. Haralick and Linda G. Shapiro, “The Con- sistent Labeling Problem”, IEEETPAhU, v. 1 (1979), pp. 1773-184, v. 2 (1980) pp. 193-203. [13] Burkhard M onien and Ewald Speckenmeyer, “Three- Satisfiability is Testable in O(1.02’) Steps”, Report No. 3, Theoretical Informatics Series, University of Paderborn (1979). [14] E. T. Parker, “Computer Investigation of Orthogonal Latin Squares of Order Ten”, Proc. Sym. Appl. hiath., v. 15 (1963), Amer. Math. Sot., Providence, R.I. p. 73. [15] Paul Walton Purdom, Jr. “Solving Satisfiability Prob- lems with Less Searching”, Indiana University Computer Sci- ence Technical Report No. 117 (1981). [16] Paul Walton Purdom, Jr., “Search Rearrangement Back- tracking and Polynomial Average Time”, Indiana University Computer Science Technical Report No. 123 (1982). [17] Paul Walton Purdom, Jr. and Cynthia A. Brown, “An Analysis of Backtracking with Search Rearrangement”, SIAM J. Comput. (to appear). [18] Paul Walton Purdom, Jr. and Cynthia A. Brown, “Poly- nomial Average-time Satisfiability Problems”, Indiana Univer- sity Computer Science Technical Report No. 118 (1981). 125 [19] Paul Walton Purdom, Jr. and Cynthia A. Brown, “The Pure Literal Rule and Polynomial Average Time”, Indiana University Computer Science Technical Report No. 128 (to appear). 
[20] Paul Purdom, Cynthia Brown and Edward Robertson, "Multi-Level Dynamic Search Rearrangement", Acta Informatica, v. 15 (1981) pp. 99-114.
[21] Thomas J. Schaefer, "The Complexity of Satisfiability Problems", Proceedings of the Tenth Annual ACM Symposium on Theory of Computing (1978) pp. 216-226.
[22] David Waltz, "Understanding Line Drawings of Scenes with Shadows", in The Psychology of Computer Vision, edited by Patrick Henry Winston, McGraw-Hill, New York (1975).

[Figure 1 plot omitted: regions of the p(v), t(v) plane, with tick marks at ln ln v, ln 2, and ln v, and regions labelled Pure Literal and Hard Problems.]

Figure 1. A diagram showing the regions of p(v),t(v) space where random CNF predicates can be solved in polynomial average time. A portion of the contour separating the region of polynomial behavior from the region of exponential behavior is shown for several algorithms. The part of the space where each algorithm performs best is labelled with the name of the algorithm. The central region contains problem sets for which no polynomial average time algorithm is known. In most of this region the problems have an exponential number of solutions, but below the line marked "solutions" the average number of solutions is polynomial. The region marked "pseudo-hard" contains problem sets for which the analyzed algorithms take average time exponential in the number of variables but polynomial in the problem size (the typical problems are exponentially large there).

[Figure 2 plot omitted: pv on the vertical axis versus t/v, ranging from 1 to about 10^8, on the horizontal axis.]

Figure 2. A graph giving more details on the performance of backtracking algorithms. The vertical axis is pv, the average number of literals per clause. The horizontal axis is t/v, the number of clauses per variable. The curve marked Solutions separates the region where the average number of solutions per problem is exponential from where it is polynomial. The curve marked Level 0 separates the region where the average running time of ordinary backtracking is exponential from where it is polynomial. The analysis for simple search rearrangement backtracking produces only limits on its performance. The shaded region marked Level 1 separates the region where the average running time of simple search rearrangement backtracking is exponential from where it is polynomial.
1982
52
148
Synchronization of Multi-Agent Plans

Jeffrey S. Rosenschein
Computer Science Department
Stanford University
Stanford, California 94305

Abstract

Consider an intelligent agent constructing a plan to be executed by several other agents; correct plan execution will often require that actions be taken in a specific sequence. Therefore, the planner cannot simply tell each agent what action to perform; explicit mechanisms must exist for maintaining the execution sequence. This paper outlines such mechanisms. A framework for multiple-agent planning is developed, consisting of several parts. First, a formalism is adopted for representing knowledge about other agents' beliefs and goals, and is extended to allow representation of their capabilities. Communication primitives are defined that allow selective acceptance of goals and facts, and an explicit means of inducing an agent to perform an act is introduced. Finally, the ordering mechanisms (consisting of sequencing operators and a planning heuristic) are presented, along with a specific example of their use.

Introduction

In recent years there has been growing interest in distributed artificial intelligence systems, collections of intelligent agents cooperating to solve goals. The motivation for a distributed approach to problem solving is two-fold: increased efficiency and increased capabilities. Certain tasks, such as sensing and control of air or ship traffic, have an inherently distributed nature and so lend themselves to a distributed solution [1] [2]. Even non-sensory tasks may be inherently distributed; the knowledge required to carry out these tasks might be split among several machines. Again, distributed problem solving is a natural way to proceed.

There exist two major paradigms for distributed artificial intelligence systems. The first paradigm is planning for multiple agents, where a single intelligent agent constructs a plan to be carried out by a group of agents, and then hands out the pieces of the plan to the relevant individuals. Randy Davis calls this paradigm "distributing the solution" [3]. The second paradigm is distributed problem solving, where a group of intelligent agents together construct, and possibly execute, the final plan. This paper is concerned with the first paradigm, that of planning for multiple agents; in particular, we examine the problem of achieving synchrony among a group of agents who will be carrying out a centrally-produced plan.

Imagine, for example, intelligent agents (located on various computers) that can construct and execute plans in the operating systems domain as well as communicate with each other (this is the domain being explored by the Stanford Intelligent Agents project). A user might tell the agent at Stanford that he wants a file X at MIT to be printed at CMU; the Stanford agent will construct a complete plan to accomplish this goal (containing certain actions to be taken by the MIT and CMU agents), and then tell MIT and CMU what to do. The plan might involve MIT sending the file to CMU, and CMU's printing it, but the Stanford agent must ensure that these two actions occur in this order and are not reversed. One solution is for MIT to send file X to CMU, and then notify CMU that it has been sent; CMU waits for this notification, and then prints the file. We present a method that formalizes the above solution, and thus can be used to maintain an ordering of actions performed by various agents.
As a framework for the multiple-agent planning system, a formalism is adopted for representing the beliefs and goals of agents, as well as their capabilities; primitives for inter-agent communication are defined. A planning heuristic for multi-agent synchrony is presented, along with its requisite operators. Finally, the example above is presented in greater detail.

The Multi-Agent Formalism

Beliefs and Goals

To construct plans for other agents, the planner must be able to represent and reason about their beliefs and goals. Though several alternatives are possible (such as "possible-worlds" formalisms [4] [5] [6] [7]), we choose the FACT and GOAL list formalism of Konolige and Nilsson [8]. In this approach, each agent has a FACT list that contains items that it believes, including the beliefs and goals of other agents (these last two are specified through the use of a metalanguage); the GOAL list contains the current goals of the agent. As an example, if A0 believes that A1 believes A1 has file FOO, and A0 also believes that A1 has the goal of deleting that file, the following items appear in A0's data base:

    FACT(A1,'EXIST(FOO,A1)')
    GOAL(A1,'DELETED(FOO)')

All planning will make use of STRIPS-like operators [9]. We allow instantiated operators to appear explicitly on any agent's GOAL list, rather than limiting this list to state descriptions. We differentiate between the two types of goals by calling the latter "operator-goals" and the former "state-goals".

Capabilities

Previous work on multiple agents has assumed that all agents have identical capabilities, that is, that all agents have access to the identical operators. When agents are planning for differing operating system environments, this is clearly not the case. For example, an agent located on one machine may be able to run TEX on a file, while the agent on another machine that lacks TEX will not. We introduce the predicate HASCAP(agent,operator) to represent the capability of agent to carry out operator. Generality is provided by the use of partially instantiated operators in the HASCAP predicate. For example, if agent A0 believes that agent A1 can, in general, DELETE files, the following would appear in A0's data base:

    HASCAP(A1,DELETE(A1,file))

We use the standard convention that the free variable "file" is universally quantified. HASCAP is also defined over more complex operator combinations; for example, the following axiom holds:

    HASCAP(agent,AND(operator1,operator2)) ≡ HASCAP(agent,operator1) ∧ HASCAP(agent,operator2)

WILL-PERFORM as a Precondition

Cohen and Perrault [10] recognized the usefulness of making an agent's "wanting" to use an operator an explicit precondition of that operator. In this manner, one can get an agent to perform some action by making the action's preconditions true, including the precondition of making the agent "want" to carry out the action. We adopt a similar strategy recast into a more general form, and introduce the predicate WILL-PERFORM(agent,operator) to signify that agent will perform operator. WILL-PERFORM appears as an explicit precondition of all operators that do not occur spontaneously; so, for A0 to get A1 to apply the operator OP, A0 needs to make sure WILL-PERFORM(A1,OP) is true.
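To make the bookkeeping concrete, here is a minimal sketch (ours, not part of the paper) of an agent's FACT and GOAL lists with capabilities stored alongside; will_perform implements the axiom given in the next section.

    class Agent:
        # Sketch (ours) of an agent's data base: FACT and GOAL lists as
        # sets of strings, plus the HASCAP capability templates.
        def __init__(self, name):
            self.name = name
            self.facts = set()         # e.g. "EXIST(FOO,A1)"
            self.goals = set()         # state-goals and operator-goals
            self.capabilities = set()  # operators this agent can apply

        def will_perform(self, oper):
            # HASCAP(agent,oper) and GOAL(agent,oper) imply WILL-PERFORM
            return oper in self.capabilities and oper in self.goals

    a1 = Agent("A1")
    a1.facts.add("EXIST(FOO,A1)")
    a1.capabilities.add("DELETE(A1,FOO)")
    a1.goals.add("DELETE(A1,FOO)")
    assert a1.will_perform("DELETE(A1,FOO)")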
The following axiom says that if an agent has the capability to perform an act and has the desire to perform the act, then he will perform it:

    HASCAP(agent,oper) ∧ GOAL(agent,oper) ⊃ WILL-PERFORM(agent,oper)

Since the fact that this axiom is universally known is also known, the following axiom actually appears in every agent's data base:

    FACT(x,'HASCAP(agent,oper) ∧ GOAL(agent,oper) ⊃ WILL-PERFORM(agent,oper)')

This axiomatization of WILL-PERFORM models an agent's using an operator as an act of volition; if involuntary performance of acts is possible, WILL-PERFORM(agent,operator) could be made true without the agent actually possessing the operator as a goal. Other axioms would be introduced to model these cases. An agent can apply an operator once WILL-PERFORM(agent,operator) becomes true; he will not necessarily check the truth of the operator's other preconditions or try to make them true. Given these assumptions, it is essential for the planner to ensure that WILL-PERFORM's brother preconditions are true before WILL-PERFORM itself becomes true. Achieving this ordering of preconditions is identical to achieving synchrony, and will be discussed in further detail below.

Communication Primitives

To integrate planning and communication, we need to adopt a coherent theory of planning communication acts themselves. The work of Cohen and Perrault sheds considerable light on this issue, and we use several of their communication operators (with modification) in the work that follows. For simplicity, the initiator of a communication act will be called the "speaker," and the receiver will be called the "hearer." We use four communication operators: REQUEST, CAUSE-TO-WANT, INFORM, and CONVINCE. REQUEST and INFORM are illocutionary acts, that is, they model the speaker's communication act, but not the effect that act has on the hearer. CAUSE-TO-WANT and CONVINCE are perlocutionary acts, that is, they model the effects of communication acts. For example, the speaker might REQUEST some act of a hearer, but this will not directly cause the hearer to adopt that act as a goal; before the hearer adopts the goal, a CAUSE-TO-WANT must occur. This decoupling of the communication act from its effect allows for natural modeling of goal or fact refusal by the hearer (as contrasted with Konolige and Nilsson's single-step "asktoachieve" and "tell" operators). While Cohen and Perrault make CAUSE-TO-WANT and CONVINCE trivially triggered by REQUEST and INFORM respectively, we introduce the predicates ACCEPT and BE-SWAYED as explicit preconditions on the former operators.

The communication operators are defined as follows:

REQUEST(x,y,act) -- x requests y to adopt act as a goal
    P: WILL-PERFORM(x,REQUEST(x,y,act))
    A: FACT(y,'GOAL(x,act)')

The effect of REQUEST is to let y know that x has "act" as a goal; x need not believe a priori that y can satisfy "act."

CAUSE-TO-WANT(x,y,act) -- x causes y to adopt act as a goal
    P: FACT(y,'GOAL(x,act)') ∧ FACT(y,'HASCAP(y,act)') ∧ ACCEPT(x,y,act) ∧ HASCAP(y,CAUSE-TO-WANT(x,y,act))
    A: GOAL(y,act)

CAUSE-TO-WANT causes y to adopt x's goal as its own, but only if y believes he has the capability to satisfy the goal and the ACCEPT predicate is true.

INFORM(x,y,prop) -- x informs y of prop
    P: prop ; WILL-PERFORM(x,INFORM(x,y,prop))
    A: FACT(y,'FACT(x,prop)')

INFORM should only take place if prop is true; its effect is to let y know that x believes prop.
The ";" appearing in INFORM's precondition list means that the item appearing before it should be satisfied before the item following it.

CONVINCE(x,y,prop) -- x convinces y to believe prop
    P: FACT(y,'FACT(x,prop)') ∧ BE-SWAYED(x,y,prop) ∧ HASCAP(y,CONVINCE(x,y,prop))
    A: FACT(y,prop)
    D: FACT(y,NEGATE(prop))

CONVINCE causes y to adopt x's belief as its own, but only if BE-SWAYED is true; any contradictory belief is discarded. NEGATE is a function over strings such that NEGATE('x') gives the string '¬x'. Also, note the absence of WILL-PERFORM as a precondition of CAUSE-TO-WANT and CONVINCE; these operators will be applied when their preconditions are true, without any agent explicitly "wanting" them.

Agents' data bases contain axioms involving the ACCEPT and BE-SWAYED predicates; these axioms specify conditions under which the hearer will accept the speaker's facts or goals. For example, if agents A0 and A1 are in a master-slave relationship, we might have the following three axioms to indicate A1's subservience to A0's dictates:

    MASTER(A0,A1)
    MASTER(x,y) ⊃ ACCEPT(x,y,act)
    MASTER(x,y) ⊃ BE-SWAYED(x,y,prop)

Other axioms might model A1's willingness to ACCEPT requests if his machine's load is low, or if he owes A0 a favor; he might BE-SWAYED by A0 if he knows A0 to be reliable, or to have particularly good information about this kind of fact (e.g. A0 will know best whether a file exists on his own machine).

Ordered Preconditions

As explained above, the planner expects to make WILL-PERFORM(agent,operator) true in order to get agent to perform operator; once this predicate is true, the operator can be applied at any time. WILL-PERFORM will not be made true, however, until agent accepts the operator-goal operator (because of the above axiomatization of WILL-PERFORM). Thus, all other preconditions of operator should be true before the operator itself is adopted as a goal. Satisfaction of this principle will guarantee multi-agent synchrony. In general, an operator-goal should not be adopted by an agent until he knows that the other preconditions of the operator have been satisfied. To accomplish this, we introduce the predicates WAITING and HAS-DONE, and the operators PAUSE and WHEN-GET, defined as follows:

PAUSE(agent,precond,aim) -- agent decides to wait until precond is satisfied before adopting aim
    P: WILL-PERFORM(agent,PAUSE(agent,precond,aim))
    A: FACT(agent,'WAITING(precond,aim)')

WHEN-GET(agent,precond,aim) -- agent adopts aim when he knows that precond is satisfied
    P: FACT(agent,'WAITING(precond,aim)') ∧ FACT(agent,precond) ∧ HASCAP(agent,WHEN-GET(agent,precond,aim))
    A: GOAL(agent,aim)
    D: FACT(agent,'WAITING(precond,aim)')

So, for example, to get agent A1 to wait until agent A0 has done act G before himself doing act H, we would pass the following operator-goal to A1: PAUSE(A1,HAS-DONE(A0,G),H). This causes A1 to place WAITING(HAS-DONE(A0,G),H) in its data base. When A1 finds out (or more usually, is told) that A0 HAS-DONE G, WHEN-GET is triggered and A1 adopts H as a goal. Conjunctive preconditions are handled by the axiom

    FACT(agent,prop1) ∧ FACT(agent,prop2) ⊃ FACT(agent,AND(prop1,prop2)).
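The trigger-and-release behavior of PAUSE and WHEN-GET is simple to simulate. The sketch below is ours, not the paper's implementation; preconditions and aims are plain strings, and tell plays the role of an incoming INFORM followed by CONVINCE.

    class WaitingAgent:
        # Sketch (ours) of PAUSE / WHEN-GET: PAUSE records a WAITING pair,
        # and WHEN-GET fires once the awaited fact reaches the data base.
        def __init__(self):
            self.facts = set()
            self.goals = set()
            self.waiting = []              # (precond, aim) pairs

        def pause(self, precond, aim):     # effect of PAUSE
            self.waiting.append((precond, aim))

        def tell(self, fact):              # e.g. the result of INFORM+CONVINCE
            self.facts.add(fact)
            self.when_get()

        def when_get(self):                # effect of WHEN-GET
            still_waiting = []
            for precond, aim in self.waiting:
                if precond in self.facts:
                    self.goals.add(aim)    # adopt the deferred operator-goal
                else:
                    still_waiting.append((precond, aim))
            self.waiting = still_waiting

    cmu = WaitingAgent()
    cmu.pause("HAS-DONE(MIT,FTP-SEND(MIT,CMU,REP.PRESS))", "DOVER(CMU,REP.PRESS)")
    cmu.tell("HAS-DONE(MIT,FTP-SEND(MIT,CMU,REP.PRESS))")
    assert "DOVER(CMU,REP.PRESS)" in cmu.goals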
Our planner employs the following heuristic to guarantee multi-agent synchrony: assume there is an operator OP with preconditions P1 through PN (some J-element subset "S" of which is not already true in the initial state), and WILL-PERFORM. The planner wants agent A0 to apply OP. Expansion of the plan on P1 through PN occurs before expansion of WILL-PERFORM; assume that the elements of S are made true by agents A1 through AJ, using operators O1 through OJ respectively. Then, instead of directly inducing OP's WILL-PERFORM operator-goal through a REQUEST and CAUSE-TO-WANT, the planner satisfies it through the PAUSE and WHEN-GET operators, whose "precond" variables are instantiated as the conjunction of J elements of the form HAS-DONE(Ai,Oi), where "i" ranges from 1 to J. Satisfaction of WHEN-GET's second FACT precondition is accomplished by INFORMs and CONVINCEs of the agents satisfying S, each of whom sends their own "HAS-DONE(Ai,Oi)" message. Finally, the planner must direct each of these agents to first apply Oi, and then inform A0 that they have done so (with a HAS-DONE message).

An Example

A person using an Intelligent Agent at Stanford [ST] would like file REP.PRESS at MIT to be printed on the Dover printer at CMU. The agent at Stanford knows about the following two operators (in addition to the communication operators and the PAUSE and WHEN-GET operators explained above):

DOVER(agent,file) -- agent prints file on the Dover
    P: EXIST(file,agent) ; WILL-PERFORM(agent,DOVER(agent,file))
    A: D-PRINTED(file,agent)

FTP-SEND(x,y,file) -- x sends file to y
    P: EXIST(file,x) ; WILL-PERFORM(x,FTP-SEND(x,y,file))
    A: EXIST(file,y)

The following items appear on the Stanford agent's FACT list (in addition to the HASCAP and WILL-PERFORM axioms listed above):

    (1) FACT(x,'HASCAP(CMU,DOVER(CMU,file))')
    (2) FACT(x,'HASCAP(MIT,FTP-SEND(MIT,CMU,file))')
    (3) FACT(x,'HASCAP(CMU,PAUSE(CMU,precond,aim))')
    (4) FACT(x,'HASCAP(CMU,WHEN-GET(CMU,precond,aim))')
    (5) FACT(z,'HASCAP(x,REQUEST(x,y,act))')
    (6) FACT(z,'HASCAP(y,CAUSE-TO-WANT(x,y,act))')
    (7) FACT(z,'HASCAP(x,INFORM(x,y,prop))')
    (8) FACT(z,'HASCAP(y,CONVINCE(x,y,prop))')
    (9) EXIST(REP.PRESS,MIT)
    (10) FACT(x,'MASTER(ST,CMU)')
    (11) FACT(x,'MASTER(ST,MIT)')
    (12) FACT(x,'BE-SWAYED(MIT,CMU,prop)')
    (13) FACT(z,'MASTER(x,y) ⊃ ACCEPT(x,y,act)')
    (14) FACT(z,'MASTER(x,y) ⊃ BE-SWAYED(x,y,prop)')

Axioms 1 through 8 list capabilities of the agents involved (actually, knowledge about these capabilities), with 5-8 stating that all agents have the basic communication primitives. Axioms 10 through 14 enlighten us about the hierarchy of control among the agents. Note that by the semantics of FACT, the axiom FACT(x,prop) in an agent's data base implies that prop is also in his data base (i.e. if an agent knows that everyone knows prop, then he knows prop).

Figure 1 gives the expanded plan that ST constructs to fulfill the user's goal (the communication acts are represented schematically). It involves getting MIT to first send the file to CMU and then inform CMU that the file has been sent. In turn, CMU is told to wait until notified that MIT has carried out the FTP-SEND, and then to DOVER the file. Construction of the plan proceeds as follows: working backwards from the D-PRINTED goal, ST chooses the DOVER operator to achieve it. Since the operator's preconditions are ordered, ST expands the first precondition (EXIST) before the second (WILL-PERFORM). The WILL-PERFORM in Figure 1's left branch does not trigger the planning heuristic, since its brother precondition "EXIST(REP.PRESS,MIT)" is true in the initial state. However, the WILL-PERFORM in the right branch does trigger the heuristic, since its brother precondition, "EXIST(REP.PRESS,CMU)", is not initially true. Thus, the goal "DOVER(CMU,REP.PRESS)" is not passed to CMU by a REQUEST and CAUSE-TO-WANT from ST.
Instead, ST plans for CMU to get this goal through the PAUSE and WHEN-GET operators; both of these operators' aim variables are instantiated to "DOVER(CMU,REP.PRESS)", while their precond variables have "HAS-DONE(MIT,FTP-SEND(MIT,CMU,REP.PRESS))" as an instantiation. WHEN-GET's second FACT precondition is thus satisfied by a message from MIT, "HAS-DONE(MIT,FTP-SEND(MIT,CMU,REP.PRESS))". In turn, MIT is instructed to send this message to CMU after it has, in fact, done the FTP-SEND.

Acknowledgments

The issues presented in this paper have been greatly clarified through many useful discussions with Mike Genesereth. This research is supported by ONR Contract # N00014-81-K-0303.

REFERENCES

[1] Davis, R. and R. G. Smith, "Negotiation as a Metaphor for Distributed Problem Solving," Artificial Intelligence Laboratory Memo No. 624, Massachusetts Institute of Technology, Cambridge, MA (May 1981).
[2] Steeb, R., S. Cammarata, F. A. Hayes-Roth, P. W. Thorndyke and R. B. Wesson, "Distributed Intelligence for Air Fleet Control," R-2728-ARPA, Rand Corporation, Santa Monica, CA (October 1981).
[3] Davis, R., "A Model for Planning in a Multi-Agent Environment: Steps Toward Principles for Teamwork," AI Working Paper, Massachusetts Institute of Technology, Cambridge, MA (June 1981).
[4] Moore, R. C., "Reasoning About Knowledge and Action," in Proc. IJCAI-5, Cambridge, Massachusetts, August 1977, pp. 223-227.
[5] Moore, R. C., "Reasoning About Knowledge and Action," Artificial Intelligence Center Technical Note 191, SRI International, Menlo Park, California (1980).
[6] Appelt, D., Planning Natural Language Utterances to Satisfy Multiple Goals, Ph.D. thesis, Stanford University, December 1981.
[7] Appelt, D. E., "A Planner for Reasoning about Knowledge and Action," Proc. of the First Annual Conference of the American Association for Artificial Intelligence, Stanford, California, August 1980, pp. 131-133.
[8] Konolige, K. and N. J. Nilsson, "Multiple-Agent Planning Systems," Proc. of the First Annual Conference of the American Association for Artificial Intelligence, Stanford, California, August 1980, pp. 138-141.
[9] Nilsson, N. J., Principles of Artificial Intelligence (Menlo Park: Tioga Publishing Co., 1980).
[10] Cohen, P. R. and C. R. Perrault, "Elements of a Plan-Based Theory of Speech Acts," Cognitive Science, 3 (3), pp. 177-212 (1979).

FIGURE 1

[Figure 1 diagram omitted: the expanded plan tree rooted at D-PRINTED(REP.PRESS,CMU), expanding through DOVER(CMU,REP.PRESS), EXIST(REP.PRESS,CMU), FTP-SEND(MIT,CMU,REP.PRESS), and the PAUSE/WHEN-GET goals; one link style marks REQUEST and CAUSE-TO-WANT, another marks INFORM and CONVINCE.]
1982
53
149
A SEARCH PROCEDURE FOR PERFECT INFORMATION GAMES OF CHANCE: ITS FORMULATION AND ANALYSIS

Bruce W. Ballard
Dept. of Computer Science
Duke University
Durham, N.C. 27706

ABSTRACT

An algorithm is developed for searching the trees of "perfect information" games involving chance events. Many dice games (e.g. backgammon, craps, and monopoly and similar board games), and some card games (e.g. casino blackjack), have this property. For depth 3 trees, empirical observation reveals a search reduction of more than 50 percent, while closed-form analysis reveals a best-case complexity of O(N**2). This represents a substantial savings over the O(N**3) behavior of the "obvious" search strategy.

I INTRODUCTION

Many games involving chance events, such as the roll of dice or the drawing of playing cards, can be modeled by introducing "probability" nodes into standard minimax trees. We use the symbols + and - to denote maximizing and minimizing nodes, respectively, and * (pronounced "star") to denote a probability node. We define the value of a * node as the weighted average of the values of its successors, which may occur with differing probabilities. We shall develop and evaluate the performance of an algorithm to search *-minimax trees efficiently. In this paper, we assume that all descendents of a * node are equally likely. The algorithm we present can be extended, in a direct way, to the more general case.

(This research has been partially supported by AFOSR, Air Force Systems Command, under grant AFOSR 81-0221. The author wishes to express appreciation to Dr. Donald Loveland and Tom Truscott for discussing portions of an earlier draft of this paper.)

For the most part, *-minimax trees, as we shall call them, retain the properties of ordinary minimax trees. In particular, they pertain to 2-person, 0-sum, perfect information games. By "perfect information" we mean that neither player conceals information about the current state of the game, or possible future states, that could be useful to the other player. Many dice games (e.g. craps, backgammon, and monopoly and similar board games) satisfy these criteria, as do some card games (e.g. casino blackjack). Figure 1 gives an example *-minimax tree. Backed-up values for non-terminal nodes are shown in parentheses. The value of the * node has been computed as (2 - 4) / 2 = -1.

              + (3)
             /     \
         - (3)     * (-1)
         /   \     /    \
        4     3  -4    - (2)
                       /   \
                      2     3

    Figure 1 - A Sample *-Minimax Tree

II THE *-MINIMAX SEARCH PROBLEM

In searching *-minimax trees, we want to retain the alpha-beta "cutoff" power of ordinary minimax trees. However, the presence of * nodes provides opportunities for additional forms of cutoffs. Recognizing that lower and upper bounds on the value of a * node can be derived by exploring one or more of its children, we have devised an algorithm which can reduce search complexity by more than 50 percent with random ordering of successor nodes, and by an order of magnitude with optimal ordering. As an example of a possible "* cutoff", suppose the (leaf) values of a particular tree are integers between 0 and 10, inclusive, and that a * node with 4 equally likely successors has had 2 of its successors searched. This situation is shown in Figure 2.
            *   <----- 2 <= val <= 7
         / | | \
        5  3 ?  ?

    Figure 2 - Interim Bounds on a * Node

Knowing the values of these 2 children, we can say that the smallest possible value for the * node is (5 + 3 + 0 + 0) / 4, or 2. Similarly, the greatest possible value of the * node is (5 + 3 + 10 + 10) / 4, or 7. Thus, a cutoff can occur if the alpha value passed to * is >= 7, or if the beta value is <= 2. We shall formulate a search strategy to take advantage of this form of potential cutoff. In addition, our strategy will compute new alpha and beta values for use below * nodes.

III AN ALGORITHM FOR *-MINIMAX TREES

Let L and U denote lower and upper bounds on all possible game (leaf) values. Let V1, V2, ..., VN be the values of the N successors of a * node, whose i-th successor is about to be searched. After returning from the i-th node, a cutoff will occur if

    ((V1 + ... + Vi-1) + Vi + U*(N - i)) / N <= alpha

or if

    ((V1 + ... + Vi-1) + Vi + L*(N - i)) / N >= beta

Alpha and beta values for the i-th successor are given by

    Ai = N*alpha - (V1 + ... + Vi-1) - U*(N - i)
    Bi = N*beta  - (V1 + ... + Vi-1) - L*(N - i)

where "alpha" and "beta" are the alpha-beta values of the present * node. These equations suggest that A and B be initialized by

    A1 = N * (alpha - U) + U
    B1 = N * (beta - L) + L

and updated by

    An+1 = An + U - Vn
    Bn+1 = Bn + L - Vn

The following Star1 procedure implements this strategy, making use of (1) a Term procedure, to evaluate terminal positions; (2) an Eval procedure which, depending on which player is to move next, invokes either Max or Min; and (3) a procedure to generate the successors of a node.

    Star1(board, alpha, beta)
      local A, B, i, v, vsum, AX, BX, s[];
      determine the N successors s[1], s[2], ..., s[N];
      if (N == 0) return(Term(board));
      A = N * (alpha - U) + U;
      B = N * (beta - L) + L;
      vsum = 0;
      for (i = 1; i <= N; i++) {
        AX = max(A, L);
        BX = min(B, U);
        v = Eval(s[i], AX, BX);
        if (v <= A) return(alpha);
        if (v >= B) return(beta);
        vsum = vsum + v;
        A = A + U - v;
        B = B + L - v;
      }
      return(vsum / N);

An example of the way in which * nodes use and create new alpha-beta values is suggested by the partially searched *-minimax tree given in Figure 3.

[Figure 3 diagram omitted: a partially searched tree whose interior nodes are annotated with the alpha-beta windows passed to them, e.g. <4,5>, <0,10>, <0,3>, and <-7,10>.]

    Figure 3 - A Partially Searched *-Minimax Tree
    (assuming -10 <= Leaf Values <= +10)
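For readers who want to run the procedure, here is a direct transcription into Python (ours, not from the paper). It assumes integer leaf values bounded by L = -10 and U = 10, as in Figure 3, represents interior nodes as (kind, children) tuples, and folds Term, Eval, Max, and Min into one evaluate routine.

    L, U = -10, 10  # assumed global bounds on leaf values, as in Figure 3

    def evaluate(node, alpha, beta):
        # Leaves are numbers; interior nodes are (kind, children) with
        # kind in {'max', 'min', 'star'}.
        if isinstance(node, (int, float)):
            return node
        kind, kids = node
        if kind == 'star':
            return star1(kids, alpha, beta)
        v = L if kind == 'max' else U
        for k in kids:
            w = evaluate(k, alpha, beta)
            if kind == 'max':
                v = max(v, w); alpha = max(alpha, v)
            else:
                v = min(v, w); beta = min(beta, v)
            if alpha >= beta:        # ordinary alpha-beta cutoff
                break
        return v

    def star1(kids, alpha, beta):
        n = len(kids)
        A = n * (alpha - U) + U
        B = n * (beta - L) + L
        vsum = 0.0
        for k in kids:
            v = evaluate(k, max(A, L), min(B, U))
            if v <= A: return alpha  # the * value cannot exceed alpha
            if v >= B: return beta   # the * value cannot fall below beta
            vsum += v
            A += U - v
            B += L - v
        return vsum / n

    # The tree of Figure 1: max over min(4,3) and star(-4, min(2,3))
    tree = ('max', [('min', [4, 3]), ('star', [-4, ('min', [2, 3])])])
    assert evaluate(tree, L, U) == 3

On this tree the * node fails low as soon as its first successor (-4) is seen, exactly the kind of cutoff the bounds A and B are designed to detect.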
We therefore ?lefine the class of * -complete N-ary trees by inserting a * -A- node above each node of a complete N-ary tree, and giving these * nodes N-l additional successor nodes of the same type. The leftmost part of a *- complete 2-ply binary tree appears in Figure 4. We have investigated the efficiency of Star1 and Star2 on depth 3 *-complete N-ary trees, since they correspond to trees allowing the simplest cutoffs of standard alpha-beta. * / \ / \ + + / \ / \ * * / \ / \ /-\ / \ Figure 4 - The Leftmost Portion of a 2-Ply *-Complete Binary Tree As an attempt to capture the sorts of dependencies that occur in practice, we follow Fuller et a1[731 by assigning distinct, uniformly spaced values to the arcs below a node, and defining leaf values as the sum of the arc values on the path to it from the root. In Ballard[82] we derive the following: Result 1: The asymptotic best-case behavior of algorithm Star1 on the + node of a *-complete 2-ply N-ary tree is to exainine approximately 0.7211 N**3 of the N**3 leaves beneath it. Result 2: The asymptotic best-case behavior of algorithm Star2 on the + node of a *-complete 2-ply N-ary tree ' to examine approximately lls7211 N**2 of the N**3 leaves beneath it. The latter result is encouraging because, like the 0 (N**2) best case alpha-beta result for depth 3 trees, it shows that a wise algorithm can hope to reduce seatch complexity by a factor of N. For the most part, we achieved this reduction without increasing the coin-olexity of the algorithm, its overhead, or the additional space needed. Table 1 indicates search savings for various values of N. Note the rapid convergence of Star1 toward the 72.11 percent region predicted by the asymptotic figure qiven above. Since average-case analysis is much more difficult than best-case analysis, we decided to investigate expected-case performance by empirical ~t?d;ls. iJail?q the UNIX pseudo-random number qenerator, we generated and qathered statistics on 1000 *-complete trees for each of several branching factors. Table 2 summarizes the results of Star1 nerformance. It can be seen that the average case savings is about 21 percent, or rouqhly 2/3 of the best-case savinqs (given above). Table 3 summarizes the results of Star2 performance. An interestinq result was that roughl\l half the * nodes for which a cutoff occurred were cut off during the probing phase. Also, we see that for a branching factor greater than about 20, Star2 looks at fewer than half the leaves explored by Starl. VI INCORPORATING *-MINIMAX SEARCH INTO A GAME-PLAYING PROGRAY -- In programming actual minimax games, adjustments are often made to a pure alpha-beta search because of the overwhelming size of most search trees. In particular, a static evaluation function is generally used to rank successor nodes in what appears (before searchinq) to be best-to-worst order, hopinq to assure early cutoffs; a depth - bound is often maintained in some form to preclude searching prohibitively deer, nodes; forward pruning is performed, meaning that some nodes which look unpromising are not searched at all; a transposition table is maintained to avoid searching the same position more than once if it appears ' several places ("transpositions") iAnthe search tree; and so forth. In practice, we would expect such modifications to be made to the *- minimax procedures as well, although the underlying algorithms need not be changed. 
    Table 1 - Best-Case Leaf Exploration of Star1 and Star2
              for Various *-Complete 2-Ply Trees

           Procedure Star1       Procedure Star2
    N      Number   Percent      Number   Percent
    2           5      62.5           5      62.5
    4          40      62.5          25      39.1
    6         138      63.9          58      26.9
    8         336      65.6         105      20.5
    10        670      67.0         166      16.6
    20       5560      69.5         677       8.5
    30      18990      70.3        1532       5.7
    40      45320      70.8        2732       4.3

    Table 2 - Average-Case Leaf Exploration of Star1
              for Various *-Complete 2-Ply Trees

    N      Number   Percent
    2         7.1      88.8
    4        53.9      84.1
    6         178      82.5
    8         418      81.6
    10        810      81.1
    20       6389      79.9
    30      21382      79.2
    40      50425      78.8

    Table 3 - Average-Case Leaf Exploration of Star2
              for Various *-Complete 2-Ply Trees

              Cutoffs               Leaves Seen
    N      Probing  Regular      Number   Percent
    4          1.3      0.7          48      75.4
    6          2.0      1.5         139      64.5
    8          2.8      2.5         293      57.3
    10         3.5      3.5         531      53.1
    20         8.1      8.4        3341      41.8
    30        12.6     13.4       10109      37.4
    40        17.4     18.3       22390      35.0

VII SUMMARY

We have developed an algorithm for searching trees pertaining to "perfect information" games involving chance events. We analyzed the average-case complexity of the algorithms empirically, and observed a savings of more than 50 percent. Closed-form analysis reveals a best-case complexity of O(N**2), a substantial savings over the O(N**3) behavior of the "obvious" search strategy. Our strategy can be adapted, as ordinary "alpha-beta" searching has been, to take advantage of special features of a particular game.

REFERENCES

Ballard, B. W. A search procedure for *-minimax trees. Technical Report, Dept. of Computer Science, Duke University, Durham, N.C. (1982).
Baudet, G. M. On the branching factor of the alpha-beta pruning algorithm. Artificial Intelligence 10 (1978) 173-199.
Fuller, S. H., Gaschnig, J. G. and Gillogly, J. J. An analysis of the alpha-beta pruning algorithm. Dept. of Computer Science Report, Carnegie-Mellon University (July 1973).
Knuth, D. E. and Moore, R. W. An analysis of alpha-beta pruning. Artificial Intelligence 6 (1975) 293-326.
Newborn, M. M. The efficiency of the alpha-beta search on trees with branch-dependent terminal node scores. Artificial Intelligence 8 (1977) 137-153.
1982
54
150
USING TEMPORAL ABSTRACTION TO UNDERSTAND RECURSIVE PROGRAMS INVOLVING SIDE EFFECTS

Joachim Laubsch                        Marc Eisenstadt
Universitaet Stuttgart                 The Open University
Stuttgart, W. GERMANY                  Milton Keynes, ENGLAND

ABSTRACT

This paper develops the notion of temporal abstraction, used originally for the automatic understanding of looping constructs, to account for a class of recursive programs involving side effects upon a relational data base. The programs may involve compositions of several side effects, and these side effects can occur either during descent or upon ascent from recursive calls.

I INTRODUCTION

The concept of 'temporal abstraction' was developed by Waters [5], and Rich & Shrobe [4], to describe the variables enumerated in loops as a set of objects which could be manipulated as a whole. We apply this principle to recursive procedures operating on threaded data structures and use it to understand compositions of several side effects. Our analysis is part of a larger project designed to help novice programmers understand their buggy programs ([2], [3]). The novices are students learning a LOGO-like language called SOLO [1], in which a procedure can only side effect a global data base which is a labelled, directed graph. Here is a sample problem, of the kind posed to our students: "Define a procedure INFECT which would convert a data base such as the one shown in Fig. 1 to the one shown in Fig. 2 if invoked as INFECT ANDY."

                     ;---------> INOCULATED <---------;
                     |is                            is|
    ANDY --kisses--> BARBARA --kisses--> COLIN --kisses--> DIANA

    Fig. 1

                     ;---------> INOCULATED <---------;
                     |is                            is|
    ANDY --kisses--> BARBARA --kisses--> COLIN --kisses--> DIANA
      |                                    |
      |gets                                |gets
      ;--------------> FLU <---------------;

    Fig. 2

The primitives for storing, deleting, and retrieving relational triples are called NOTE, FORGET, and CHECK. CHECK provides for conditional branching depending upon its success, and also allows simple pattern matching, as the example below illustrates:

SOLUTION-1:

    TO INFECT /X/
    1 EXAMINE /X/
    2 CHECK /X/ KISSES ?Y
      2A If present: INFECT *Y; EXIT
      2B If absent: EXIT

    TO EXAMINE /X/
    1 CHECK /X/ IS INOCULATED
      1A If present: EXIT
      1B If absent: NOTE /X/ GETS FLU; EXIT

INFECT recursively generates successive nodes along the thread of 'kisses' relations (steps 2 and 2A), and invokes EXAMINE at each node. EXAMINE conditionally side effects each node, i.e. asserting that /X/ GETS FLU only when the triple /X/ IS INOCULATED is absent. This problem is typical of a large class of tasks given to beginning SOLO users. Students may adopt a variety of methods for tackling the stated problem, combining side effects to achieve the desired result. The next two sections describe how we cope with different varieties of recursion and how the accumulation of side effects is represented.

II A CLASS OF RECURSION SCHEMATA

In a recursive procedure such as INFECT, a side effect (conditional or unconditional) may occur logically at any of five locations, depending on the juxtaposition of the side effect, the recursive call, and the termination test. Here is a skeleton of the INFECT procedure, with the five possible locations of side effect occurrences shown in angle brackets (not all five can coexist, and the SOLO user must make careful use of the control-flow keywords CONTINUE and EXIT to obtain certain combinations):
<initial> The effect steps can be classified according to time of execution and range of nodes on which the effect occurs: . . . CHECK /X/ KISSES ?Y If present: <pre-ret>; INFECT *Y; <post-ret> If absent: <termination> . . . <final> These occurrences are depicted schematically in Fig. 3, which shows a recursive procedure P partitioned into its possible constituents. The effect of each .solid box in the figure, in accordance with the notion of temporal abstraction, is to add or delete some set of triples. The overall effect of P will be the composition of these, as described in section III. 'Ret' is short for 'recursion', 'self' is the actual recursive invocation of P, and 'get-next' is an implicit enumeration function which retrieves the next node in the thread (this happens automatically during pattern-matching in SOLO, e.g. when Y is bound to BARBARA, then COLIN, then DIANA). .--A.- I--+ 'self: !---r -' FINAL L-l EFFECT Fig. 3: The recursion schema for thread enumeration Any of the effect steps may be conditional, composite, or missing altogether. The restrictions for recognizing a program as an instance of this schema are: (1) P has a parameter x which is the beginning of a thread; recursive invocations of P thus enumerate successive nodes along the thread; (2) the recursion and termination steps involve the enumerated node and relation R, where R is non-cyclic and one-to-one or many-to-one in the database (we don't deal with one-to-many mappings of R, which would exist if, say, the triples ANDY KISSES BARBARA and ANDY KISSES DIANA were both present in the data base); (3) the side effects do not alter the thread and the side effected node can only be reached from the enumerated node via one-to-one relations. RANGE OF ENUMERATION ------------- entire thread butlast of thread last of thread I I I I- I TIME OF EXECUTION OF EFFECT -- -- descent ascent INITIAL FINAL EFFECT EFFECT PRE-REC POST-REC EFFECT TERMINATION EFFECT EFFECT The important insight of temporal abstraction is that the nodes enumerated during recursion can be dealt with as a set. Thus, any of these steps has an effect which can be represented as a set of triples (which we call 'db-set'). The termination step is the degenerate case of a singleton set. We describe db-sets as follows: [db-set typical member: (<filter> => <side effect>) init: (<enumerated node> . <first value>) ret rel: <thread link> termination: (ABSENT [<ref> <thread link> ?I)] where <filter> ::= T 1 <simple filter> 1 (OR <conjunctive filter> . ..) <simple filter> ::= (PRESENT <triple>) 1 (ABSENT <triple>) <conjunctive filter> ::= (AND <simple filter> . ..) <side effect> ::= (+ <enumerated node> <link> <node> ) 1 E 3 (- <enumerated node> <link> <node> ) <ref> ::= <enumerated node> ( (get <ref> <link>). By (get <node> <link>) we denote the reference from <node> via <link>. Thus <ref> is the n-fold composition of the reference from the <enumerated node> (called e) along <link>. For example, the node BARBARA in Fig. 1 can be referenced by (get ANDY KISSES), whereas the node DIANA can be referenced by the composition (get (get (get ANDY KISSES) KISSES) KISSES) An instantiated schema does not, of course, refer to specific nodes in the data base, but rather to a generalised description of a typical node along the thread. How, then, are db-sets derived from an instantiated recursion schema? We fill the slot 'typical member' from the effect of the step (e.g. a NOTE of some triple is a + with its first argument replaced by a <ref> involving e). 
If the effect is unconditional, the filter is T, otherwise the condition is taken as the filter. The 'init' slot is the first value that e will take. Enumeration stops when termination is true. Steps 401 which work on the entire thread have (ABSENT [e <thread link> ?I) as their termination condition, whereas those working on the butlast of the thread have (ABSENT [(get e <thread link>) <thread link> ?I) as their termination condition. Consider the instantiated schema for SOLUTION-l above, which contains only an initial (conditional) effect. Its effect description is as follows: [db-set typical member: ( (ABSENT [e Is mocuhmm] ) => (+ [e GETS FLU])) init: (e . x) ret rel: KISSES termination: (ABSENT [e KISSES ?])I III COMPOSITION OF RECURSIVE SETS BY SYMBOEC EVALUATION - In general, a recursive procedure may comprise some combination of effect steps. To describe the net effect of the entire procedure, the individual effects must be composed. For instance, the addition of a set of triples followed by the conditional deletion of some elements should have the composite effect of asserting some elements of the set conditionally upon the negation of the condition for deletion (e.g. see SOLUTION-2 below). This simplification is done during symbolic evaluation. Although the db-set for each step is a temporal abstraction (and hence ignores the order in which the nodes are enumerated), compositions of side-effects are sensitive to temporal order. Thus, we first compose those effects which occur on the descent and then those which occur on the ascent. The effects are still dealt with as db-sets, i.e. the inner details of the enumeration sequence are ignored. All we need to worry about is whether the db-set is of the 'descending' or 'ascending' variety, which we know from its postion in the instantiated schema. Composition proceeds as follows: At a node in the symbolic evaluation tree (called S-node) where a db-set is to be asserted, we grow a branch with a description of the range of values that could be taken by the enumerated node. The range (e 1 e in R*(x) 1 says that e can take the value of all the nodes in the transitive closure of R starting at x. At the S-node at the end of this branch, the typical member of the db-set is asserted. If there is a (non-T) filter, we split into two branches, one with the condition and the other with its negation, and add the side effect to the S-node at the end of the branch where the condition is true. The next db-set is dealt with in the same way at all terminal S-nodes grown so far. If it has a filter, we test it at each S-node: if the sets have the 'same range, it may be possible to show either that the condition must always hold or that it can never hold, thus saving the split. If the ranges overlap, we introduce one branch for the overlapping range and others for the non-overlapping parts. On the overlapping sets it is then possible to test conditions or apply rules for cancellation (i.e. a NOTE followed by a FORGET of the same triple has no net effect) and overwriting (i.e. a NOTE following another NOTE with the same triple has no further effect). A special case arises if two consecutive assertions of sets are interleaving (i.e. they have the same time of execution of effect), and the second set's enumerated node (e2) 'runs ahead of' the first set's enumerated node (el). We say that e2 'runs ahead of' el if e2 is an n-fold composition of the reference from el along the thread-link. 
A special case arises if two consecutive assertions of sets are interleaving (i.e. they have the same time of execution of effect), and the second set's enumerated node (e2) 'runs ahead of' the first set's enumerated node (e1). We say that e2 'runs ahead of' e1 if e2 is an n-fold composition of the reference from e1 along the thread-link. Since the effect on e2 occurs before the effect on e1, we reverse the order of composition of both sets and apply the simplifications described above. Finally, starting from the terminal S-nodes we collect effects and conditions for all nodes with the same net effect into one db-set with the appropriate filter. The result is the description of the program used for comparison with the ideal effect description.

IV EXAMPLES

Here are two alternative solutions to the problem posed in section I. One has a bug.

SOLUTION-2 (Successive sweeps):

    TO INFECT /X/
    1 CONTAMINATE /X/
    2 DECONTAMINATE /X/

    TO CONTAMINATE /X/
    1 NOTE /X/ GETS FLU
    2 CHECK /X/ KISSES ?Y
      2A If present: CONTAMINATE *Y; EXIT
      2B If absent: EXIT

    TO DECONTAMINATE /X/
    1 CHECK /X/ IS INOCULATED
      1A If present: FORGET /X/ GETS FLU; CONTINUE
      1B If absent: CONTINUE
    2 CHECK /X/ KISSES ?Y
      2A If present: DECONTAMINATE *Y; EXIT
      2B If absent: EXIT

SOLUTION-3 (Ascending conditional side effect):

    TO INFECT /X/
    1 CHECK /X/ KISSES ?Y
      1A If present: INFECT *Y; CONTINUE
      1B If absent: EXIT
    2 CHECK /X/ IS INOCULATED
      2A If present: EXIT
      2B If absent: NOTE /X/ GETS FLU; EXIT
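Before the schema-based analysis that follows, a direct simulation (ours, not part of the paper) makes the behavioral difference concrete; the kisses thread is represented as a dictionary and the data base as a set of triples.

    def infect1(db, kisses, inoculated, x):
        # SOLUTION-1: conditionally side effect every node on the way down
        if x not in inoculated:
            db.add((x, 'GETS', 'FLU'))
        if x in kisses:
            infect1(db, kisses, inoculated, kisses[x])

    def infect3(db, kisses, inoculated, x):
        # SOLUTION-3: recurse first, side effect on the way back up; the
        # EXIT on the absent branch means the last node is never examined
        if x in kisses:
            infect3(db, kisses, inoculated, kisses[x])
            if x not in inoculated:
                db.add((x, 'GETS', 'FLU'))

    kisses = {'ANDY': 'BARBARA', 'BARBARA': 'COLIN', 'COLIN': 'DIANA'}
    db1, db3 = set(), set()
    infect1(db1, kisses, {'BARBARA'}, 'ANDY')   # DIANA not inoculated here
    infect3(db3, kisses, {'BARBARA'}, 'ANDY')
    assert ('DIANA', 'GETS', 'FLU') in db1
    assert ('DIANA', 'GETS', 'FLU') not in db3  # SOLUTION-3 misses the last node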
V CONCLUSION Temporal abstraction provides us with a powerful mechanism for reasoning about side effects on sets of data objects. Our method of composing several such sets lets us analyse recursive procedures involving side effects on threaded data structures. Static set descriptions, which are the essence of temporal abstraction, can be combined using simplification rules derived from symbolic evaluation techniques. A library of recursive schemata enables us to analyse a range of students' programs, to combine set descriptions in a sensible way (e.g. composing descending and ascending effects in the right order), and to generate tailor-made counter-examples for programs which might 'work' by accident. REFERENCES hl PI [sl [41 L.51 Eisenstadt, M. Artificial intelligence project. Units 3/4 of Cognitive psychology: a third level course. - - - Milton Keynes: Open University Press, 1978. Eisenstadt, automated M. & debugging Laubsch, J. assistant Towards for an novice programmers. Proceedings of the AISB-80 conference Amsterdam, 1982 Artificial -- Intelligence, Laubsch, J. & Eisenstadt, M. Domain specific debugging aids for novice programmers. Proceedings of the Seventh International Joint --- Conference Artificial Intelligence (IJCAI-81). VaEouver, BC, CANADA, 1981. Rich, C. & Shrobe, H. Initial report on a LISP programmer's apprentice. IEEE Transactions on Software Engineering, SE-4:6, - 1978. Waters, R.C. A method for analyzing loop programs. IEEE Transactions on Software - Engineering, SE-5:3, 1979. 403
1982
55
151
GIST English Generator

Bill Swartout
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90291

Abstract

This paper describes a prototype English generator which can produce English descriptions of program specifications written in Gist, a program specification language being developed at ISI. Such a facility is required because although Gist is a high level specification language, specifications written in it, like those in all other formal specification languages, are unreadable. There are several reasons for this unreadability: strange syntax; redundancy elimination; lack of thematic structure; implicit remote interactions; no representation of the motivation or rationale behind the specification; and a strict reliance on textual presentation. The current generator deals with the first two problems and part of the third. Our plans for dealing with the rest are outlined after a description of the current generator.

1. Introduction

Gist has been extensively described elsewhere [1, 2]; only a brief overview will be given here. Gist attempts to ease the burden of software design and maintenance by allowing the user to indicate what behavior he wants without requiring him to state how it is to be achieved. Gist frees its users from typical implementation concerns such as computational efficiency, method selection, and explicit data structure choice. We are exploring ways of producing efficient programs from Gist specifications by using a library of transformations which map out the specification freedoms Gist permits [6]. It is envisioned that the user will select the transformations to apply and the computer will apply them and keep a record of the applications. We expect that systems developed in this fashion will be easier to maintain because it is the specification that will be modified, rather than the highly tuned and optimized implementation code. Furthermore, the computer record of the original development will not only document that development, but also allow substantial portions of the original development to be recovered and reused after the modifications are made, thereby substantially speeding up the re-development of a working system.

(This research is supported by the Air Force Systems Command, Rome Air Development Center under contract No. F30602 81 K 0056. Views and conclusions contained in this report are the author's and should not be interpreted as representing the official opinion or policy of RADC, the U.S. Government, or any person or agency connected with them.)

2. Gist English Generator

Given a Gist specification and a small amount of additional grammatical information needed for translation (detailed below), the Gist English Generator produces an English description of the specification. There are several reasons why such a capability is important for Gist or any specification language. First, since a specification is often used as a "contract" between a customer and an implementor, it is important that all those concerned be able to understand the specification. Since customers will frequently be unfamiliar with the formal specification language, a capability for making such formal specifications understandable is needed. Second, an English translation capability can provide an alternate view of a formal specification and, hence, be useful as a debugging aid even for those familiar with the formalism, such as the specifier himself. Although the English generator has been operational for only a short time, it has already made several specification errors more apparent to us. Third, since good high level specification languages embody constructs and make default assumptions that are unfamiliar to those trained to use traditional programming languages, an English generator can serve as a pedagogical aid by re-casting a specification in English, thereby shortening the time required for familiarization with both the specification language and specifications written in it.

A goal for the generator was to have it produce English directly from the specification as much as possible, as opposed to requiring the specification writer to supply substantial amounts of additional information about how the specification should be translated into English. Achieving this goal has two major benefits. First, the translation will depend more on the specification itself than on separate information required for translation; thus there is more certainty that changes in the specification will appear in the translation. Second, only minimal additional effort will be required on the part of the specification writer for his specifications to be translated into English.
Although the English generator has been operational for only a short time, it has already made several specification errors more apparent to us. Third, since good high level specification languages embody constructs and make default assumptions that are unfamiliar to those trained to use traditional programming languages, an English generator can serve as a pedagogical aid by re-casting a specification in English, thereby shortening the time required for familiarization with both the specification language and specifications written in it. A goal for the generator was to have it produce English directly from the specification as much as possible, as opposed to requiring the specification writer to supply substantial amounts of additional information about how the specification should be translated into English. Achieving this goal has two major benefits. First, the translation will depend more on the specification itself than on separate information required for translation, thus there is more certainty that changes in the specification will appear in the translation. Second, only minimal additional effort will be 404 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. required on the part of the specification writer for his specifications to 'be translated into English. Given the generally dismal history of program documentation, this is an important consideration in assuring that the generator can be used with most specifications. We recognized that to achieve this goal, specifications would have to be written following a certain style. To aid us in defining this style and to assure that it be as natural as possible, we examined existing Gist specifications to determine haw the constructs of Gist were being used, and what were appropriate English translations for them. We found that most Gist forms could be mapped into English using the information supplied by the specification alone, but that relations (particularly attribute relations) and action declarations were used in several ways with differing English translations (described below). The generator uses heuristics to attempt to determine what translation should be employed for these constructs. When the heuristics are insufficient, the user can indicate the proper translation by providing additional information with the specification. 2.1. Generator Organization The Gist English generator uses three passes and an intermediate case grammar representation [4]. The first pass of the generator examines the Gist specification and creates a case grammar representation of the English to be produced. The second pass performs inter-sentence transformations on the case grammar representation to improve the quality of the English description that will be produced. For example, this pass conjoins sentences where possible ti reduce the wordiness of the explanation. The third pass uses the transformed case grammar to produce actual English. The third pass also performs some intra-sentence optimizations. One such transformation was motivated by the observation that sentences which mention a definite reference first followed by an indefinite are clearer than those in which the two are reversed. For example, "A pier of the manager's port must be the pier p'-isnot arclear as "JI'he pier E must be a pier of the manager's .- - - - ,port". This organization has some distinct advantages. The multiple representations provide appropriate points for making transformations. 
For example, transformations which are primarily concerned with English, such as conjunction insertion, are most appropriately made on the case grammar. It would be much more awkward to make such transformations during the first pass. Another advantage is that the case grammar and passes two and three of the generator are independent of the particular representation used for Gist specifications. Thus, the portions of the generator that embody English knowledge can be exported to other applications.

Text generation has been investigated by a number of researchers (see [7] for a current bibliography). Boris Katz has produced a generator which is perhaps the most similar in structure to the one presented here [5]. He has concentrated more on pass two, that is, on transforming the primitive English description formed in pass one into a more fluent explanation. In the Gist generator, the most difficult task has been forming the primitive explanation, since Gist and English are often quite different in terms of what they can represent easily.

2.2. Translating Attribute Relations

Attribute relations and action declarations require a richer set of translations than other Gist constructs. This section and the next outline how their English translation is performed. Attribute relations are binary relations declared as part of type declarations. For example, the type declaration:

    type ship (Destination | port);

declares the type ship and a relation Destination between ships and ports. When used elsewhere in Gist, :Destination is used to indicate the mapping from ships to ports, and ::Destination is used for the back-mapping from ports to ships. For example, (1) ship:Destination refers to the destination of a ship, and (2) port::Destination refers to a ship whose destination is the port.

2.2.1. Kinds of Attribute Relations

We have identified three major uses of attribute relations which have different English translations.

The first use is illustrated by the example given above. To translate the forward mapping, the relation name may be used as a noun modified by a genitive form of the declared type. Thus (1) above translates as "the destination of the ship" or "the ship's destination". To translate the back-mapping we generate a noun phrase whose head noun is the type of object being referred to (in this case, ship) modified by a relative clause indicating the relationship. Thus, (2) translates as "the ship whose destination is the port". In the absence of other information, this kind of translation is employed by the generator.

Attribute relations are also used to indicate "part of" relations. In the specifications we examined, this type of relationship was usually indicated by giving the relation the same name as the attribute type, as in:

    type auto (Engine | engine);

This declares that every object of type auto has an attribute called Engine, whose type is engine. The translation for part-of relations is similar to those described above, but the verb "have" or "belong" is used in place of "is". Thus, the type declaration itself would translate as "Each auto has an engine"; forward mappings (auto:Engine) -- from autos to engines -- would translate as "the auto's engine", while back-mappings (engine::Engine) -- from engines to autos -- would translate as "the auto that has the engine". The "part of" translation is used by the generator whenever the name of the relationship is the same as its attribute type.
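To make the dispatch concrete, the following is a minimal sketch -- hypothetical Python, not ISI's implementation -- of how the name-based heuristic for these first two kinds of attribute relation might be coded. (The third, verb-like use is described next; it cannot be detected from names alone and so requires an explicit annotation.)

    # Sketch of the name-based heuristic for translating attribute
    # relations.  All names here are illustrative, not from the system.
    def translate_forward(owner_type, relation, attr_type):
        """Translate owner:Relation, e.g. ship:Destination."""
        if relation.lower() == attr_type.lower():
            # "part of" use: the relation is named after its attribute type
            return "the " + owner_type + "'s " + attr_type
        # default use: relation name as a noun with a genitive modifier
        return "the " + relation.lower() + " of the " + owner_type

    def translate_backward(owner_type, relation, attr_type):
        """Translate attr::Relation, e.g. port::Destination."""
        if relation.lower() == attr_type.lower():
            return "the " + owner_type + " that has the " + attr_type
        return ("the " + owner_type + " whose " + relation.lower() +
                " is the " + attr_type)

    assert translate_forward("ship", "Destination", "port") == \
        "the destination of the ship"
    assert translate_backward("ship", "Destination", "port") == \
        "the ship whose destination is the port"
    assert translate_forward("auto", "Engine", "engine") == \
        "the auto's engine"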
Finally, attribute relations are sometimes used as verbs, as in:

    type pier (Handle | cargo);

The translation for this type declaration is "Each pier handles a cargo". The type being declared is taken to be the subject and the attribute type is the object. pier:Handle translates as "the cargo which is handled by the pier", and cargo::Handle translates as "the pier which handles the cargo".

One problem with the verb forms is that the translation can be very awkward if it appears deeply embedded within other forms. The generator recognizes such situations and uses several sentences to describe these embedded forms, automatically introducing intermediaries as needed (an example appears below). The specifier must indicate that an attribute relation is to be translated as a verb by placing a verb property on the name of the relation. This is one of the small grammatical additions required for natural language generation.

2.3. Translating Actions

Actions correspond to verbs. In the specifications we have encountered so far, the name given to an action is either an English verb, or it is a compound name (e.g., MoveShip or Replace-Line) where the first element of the compound corresponds to the verb. If a name is not a compound, the generator assumes that it is an English verb. If it is a compound, the generator assumes that the first element of the compound is the verb. (*The generator knows about stylized ways of creating compounds, including separating by upper and lower case, hyphens and underscores, and it can break these compounds apart.)

The parameters in an action correspond to cases in a case grammar. The generator knows how to map a fixed set of cases into English. The user must supply the generator with annotations for each action declaration telling it the action's parameter/case correspondences. This is the second (and final) grammatical addition required for natural language generation. Currently the generator allows six cases for parameters. We expect this number to grow slightly as we gain more experience with a wider variety of specifications. The current cases include:

Agent, the thing or person performing the action. Becomes the subject of declarative sentences. (The manager moves the ship.)

Object, the thing or person upon which the action is performed. Becomes the subject of passive sentences. (John kicked the ball.)

Instrument, the thing used to perform the action. When translated to English, an Instrument is preceded by the preposition "with". (I dug the hole with a shovel.)

Dative, corresponds to the indirect object. When translated, the Dative case is preceded by "to". (I gave the ball to the boy.)

Directional, indicates the object toward which the action is proceeding. This case is also preceded by "to". (Move the ship to the pier.)

Locative, nouns in this case indicate the location of the action. The user supplies the appropriate preposition to be used with this case. (The ship sank at the pier.)

For example, to translate the action:

    action MoveShip[s | ship, p | pier]

the user would inform the generator that the first parameter was the Object and the second was the Directional. The action would then translate as: "Move the ship s to the pier p."

2.4. Examples

This section presents some examples of English produced by the generator. The first example is a preliminary specification for a harbor manager. (The reader is not expected to understand the Gist specifications before reading their English paraphrases.)
    begin
        type port(Pier | pier ::unique);
        type pier(Handle | cargo, Slip | slip ::unique);
        type slip();
        type ship(Carry | cargo, Destination | port,
                  berth | slip :optional ::optional);
        type cargo();
        agent manager(Port | port :unique ::unique) where
            action MoveShip[s | ship, p | pier]
                precondition s:berth::Slip::Pier = manager:Port
                precondition manager:Port:Pier = p
                definition update :berth of s to p:Slip;
            action LoadShip[s | ship, c | cargo]
                precondition s:berth::Slip:Handle = c
                precondition s:berth::Slip::Pier = manager:Port
                definition insert s:Carry = c;
            action AssignCargo[c | cargo, p | port]
                definition LoadShip[p::Destination, c]
        end
    end

To create an English description for this spec, the specifier had to inform the generator that the attribute relations Carry and Handle should be translated as verbs, and he had to indicate the appropriate cases for each of the parameters of the action declarations. The English that resulted appears below:

There are ports, ships, cargos and managers. Each port has one pier. Each pier belongs to one port. Each pier handles one cargo and has one slip. Each slip belongs to one pier. Each ship carries one cargo, has one destination which is a port and may have a berth which is a slip. A slip optionally is the berth of a ship. Each manager has one port. Each port belongs to one manager.

A manager can assign a cargo, load a ship or move a ship.

To move a ship s to a pier p:
    Action: The berth of the ship s is updated to the slip of the pier p.
    Preconditions: The pier of the manager's port must be the pier p. The slip of the pier of the manager's port must be the berth of the ship s.

To load a cargo c on a ship s:
    Action: Assert: The ship s carries the cargo c.
    Preconditions: The slip of the pier of the manager's port must be the berth of the ship s. The pier that has the berth of the ship s must handle the cargo c.

To assign a cargo c to a port p:
    Action: Load the cargo c on a ship whose destination is the port p.

When the person who wrote this specification saw the English description of it, he immediately realized that he had made a mistake, because ports should have more than one pier and piers should have more than one slip. This mistake had been hidden in the Gist spec because Gist defaults the mapping for an attribute relation to unique in the forward direction (i.e., from ports to piers and piers to slips). After correcting those bugs and making some additions, a new specification and English description were produced:

    begin
        type port(Pier | pier :multiple ::unique,
                  harbor | ship :any ::optional);
        type pier(Handle | cargo :multiple, Slip | slip :multiple ::unique);
        type slip();
        type ship(Carry | cargo :any, Destination | port,
                  berth | slip :optional ::optional);
        always required Berths Are In Ports
            for all s | ship || s:berth => s::harbor:Pier:Slip = s:berth;
        type cargo() optional supertype of <grain(); fuel()>;
        always prohibited Fuel And Grain
            there exists s | ship, g | grain, f | fuel ||
                s:Carry = g and s:Carry = f;
        agent manager(Port | port :unique ::unique) where
            action MoveShip[s | ship, p | pier]
                precondition s::harbor = manager:Port
                precondition manager:Port:Pier = p
                definition update :berth of s to p:Slip;
            action LoadShip[s | ship, c | cargo]
                precondition s:berth::Slip:Handle = c
                precondition s::harbor = manager:Port
                definition insert s:Carry = c;
            action AssignCargo[c | cargo, p | port]
                definition LoadShip[p::Destination, c]
        end
    end

The English description for the above spec: (Comments appear in parentheses.)
(The generator "sets the stage" by first creating a summary statement of the top-level types that will be described. Top-level types are those that are neither subtypes nor parts of some other type.)

There are ports, ships, cargos and managers. Each port has multiple piers. Each pier belongs to one port. Each port harbors any number of ships. Each ship may be harbored by a port.

(For each type, the generator constructs a description of its attribute relations. Note the change from the previous spec.)

Each pier handles multiple cargos and has multiple slips. Each slip belongs to one pier. Each ship carries any number of cargos, has one destination which is a port and may have a berth which is a slip. A slip optionally is the berth of a ship.

(Wherever possible, the second pass of the generator conjoins sentences. Before pass 2, the above paragraph contained four sentences. After pass 2 the first three have been conjoined into one.)

Fuels and grains are cargos. Each manager has one port. Each port belongs to one manager.

A manager can move a ship, load a ship or assign a cargo.

(The generator gives a summary description of the actions an agent can perform before describing them in detail.)

To move a ship s to a pier p:
    Action: The berth of the ship s is updated to a slip of the pier p.
    Preconditions: The pier p must be a pier of the manager's port. The manager's port must harbor the ship s.

To load a cargo c on a ship s:
    Action: Assert: The ship s carries the cargo c.
    Preconditions: The manager's port must harbor the ship s. The pier that has the berth of the ship s must handle the cargo c.

To assign a cargo c to a port p:
    Action: Load the cargo c on a ship whose destination is the port p.

(In describing actions, preconditions are described after the actions, because they represent detail beyond the description of the action itself.)

Fuel And Grain: A ship s must not carry a grain g and a fuel f.

Berths Are In Ports: If: A ship s has any berth, Then: A port p harbors the ship s. The berth of the ship s is a slip of a pier of p.

3. Current Research Issues

While the generation capability described above has already demonstrated its usefulness in making Gist specifications more readable, there is much that can be done to improve it. There are four topics we are currently investigating which we expect will substantially improve the quality of the explanations that can be offered. These are: global explanation descriptions, presentational form, level of abstraction, and symbolic execution.

One problem with the current English generator is that it makes its decisions based almost entirely on local information. That is, when translating a piece of a specification to English, decisions about how that translation should be made depend just on the particular piece of specification. Operations such as user modelling, choosing appropriate names for objects, and producing focused explanations which describe a subpart of the specification in relation to the rest all require a more global view of the explanation: the explanation itself must be viewed as a whole and manipulated before being presented. Just as the use of a case grammar provides the English generator with an intermediate representation which is more appropriate for operations such as conjunction insertion that require a more global view than surface syntax provides, a global explanation description is required for the kinds of operations mentioned above.
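To illustrate the kind of operation the case grammar level supports, here is a minimal sketch -- hypothetical Python, not ISI's actual pass two -- of conjunction insertion over toy case frames, reproducing the conjoined sentence from the example above:

    # Toy case frames: a hypothetical sketch of pass-2 conjunction
    # insertion, which merges adjacent sentences that share an agent.
    def conjoin(frames):
        merged = []
        for frame in frames:
            if merged and merged[-1]["agent"] == frame["agent"]:
                merged[-1]["predicates"].extend(frame["predicates"])
            else:
                merged.append({"agent": frame["agent"],
                               "predicates": list(frame["predicates"])})
        return merged

    def realize(frame):
        preds = [verb + " " + obj for (verb, obj) in frame["predicates"]]
        body = preds[0] if len(preds) == 1 else \
            ", ".join(preds[:-1]) + " and " + preds[-1]
        return frame["agent"].capitalize() + " " + body + "."

    frames = [
        {"agent": "each pier", "predicates": [("handles", "multiple cargos")]},
        {"agent": "each pier", "predicates": [("has", "multiple slips")]},
    ]
    print(realize(conjoin(frames)[0]))
    # -> Each pier handles multiple cargos and has multiple slips.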
Currently, explanations are only available in one presentational form: English text. Yet text is often not the clearest way of presenting an explanation. For example, most machine-produced English explanations of highly interconnected structures (such as a causal network) become rapidly confusing. The same information is substantially clearer when illustrated by a drawing. This suggests that an explainer will benefit from an ability to intermix multiple presentation forms, choosing the most appropriate one given the nature of the information to be presented and knowledge of the capabilities and preferences of the user. A preliminary graphic capability has been designed for Gist and is currently being implemented. This will allow the user to display, enter, and modify some of the information in a Gist specification. The next stage will be to integrate this capability with the English generator and a set of heuristics for choosing the most appropriate form, so that the explainer will be able to integrate graphic and text explanations.

It is generally agreed that to give good explanations, it is necessary to be able to summarize the information to be presented so that the listener is not overwhelmed by detail. The current generator has a limited ability to summarize. For example, the actions an agent can perform are presented in an overview before they are described in detail. While such Gist-based heuristics can be valuable, they will probably not be powerful enough to solve the summarization problem by themselves. The problem is that such heuristics only examine the final version of the specification, and frequently there is not sufficient information available to determine appropriate summarizations. A record of how the specification itself was developed would be very valuable, because it could detail how the final specification was elaborated from a more abstract initial specification and give the rationale behind those elaborations. This record could be used both in determining summarizations and in justifying the specification. The Gist language itself has no special features for representing these different levels of abstraction. We are currently designing a system for incrementally acquiring specifications from the specification writer. This system will allow the writer to initially give a very high-level, abstract specification. This initial description will usually be incomplete. The initial specification will be repeatedly elaborated until it is as detailed as required. This process will be recorded. The record should give the explainer a needed additional source of knowledge for providing good explanations.

Finally, we wish to be able to explain the dynamic behavior implied by a specification. Often, subtle aspects of a program's (or specification's) behavior only become apparent by executing it. A symbolic evaluator has been designed and partially implemented by Don Cohen [3] which allows the user to evaluate the specification with symbolic inputs, rather than with specific concrete inputs (although the user may supply concrete inputs if desired). In this way, the user can test a specification on a whole class of inputs at once, rather than laboriously iterating over all the possible instances of that class. While the general problem of symbolic evaluation is very hard (complex loops, for example, present behaviors that can be very difficult to express in closed form), we have found that most of the forms which actually arise in specifications are relatively easy to deal with.
Those few forms that present problems can then be evaluated concretely. The output of this evaluator is a trace which characterizes the implied behaviors of the specification. The trace is unreadable by people because it is too detailed and unfocused. A dynamic explanation capability is being designed to translate this trace into a more readable form.

References

1. Balzer, R. M., and N. M. Goldman. Principles of good software specification and their implications for specification languages. Proceedings of the Specifications of Reliable Software Conference, Boston, Massachusetts, April, 1979, pp. 58-67. (Also presented at the National Computer Conference, 1981.)

2. Balzer, R., Goldman, N. & Wile, D. Operational specification as the basis for rapid prototyping. Proceedings of the Second Software Engineering Symposium: Workshop on Rapid Prototyping, ACM SIGSOFT, April, 1982.

3. Cohen, D., Swartout, W. & Balzer, R. Using symbolic execution to characterize behavior. Proceedings of the Second Software Engineering Symposium: Workshop on Rapid Prototyping, ACM SIGSOFT, April, 1982.

4. Fillmore, C. The Case for Case. In Universals in Linguistic Theory, Holt, Rinehart and Winston, 1968.

5. Katz, B. A Three-Step Procedure for Language Generation. Tech. Rept. AI Memo 599, MIT, December, 1980.

6. London, P. & Feather, M.S. Implementing specification freedoms. Tech. Rept. RR-81-100, ISI, 4676 Admiralty Way, Marina del Rey, CA 90291, 1981. Submitted to Science of Computer Programming.

7. Mann, W.C., M. Bates, B. Grosz, D. McDonald, K. McKeown, W. Swartout. Text Generation: The State of the Art and the Literature. Tech. Rept. RR-81-101, ISI, December, 1981.
ARGUMENT MOLECULES: A FUNCTIONAL REPRESENTATION OF ARGUMENT STRUCTURE

Lawrence Birnbaum
Yale University
Department of Computer Science
New Haven, Connecticut

Understanding an utterance in an argument crucially requires determining the evidential relations it bears to prior and subsequent propositions in the argument (Birnbaum et al., 1980; Cohen, 1981). The memory representation of an argument should, accordingly, indicate which propositions a given proposition counts as evidence for (a support relation) or against (an attack relation), and which propositions support or attack it in turn. The representation of an argument can thus be viewed as a network of propositions connected by support or attack relations (an argument graph).

Although this sort of representation can be motivated simply by the need to represent the content of an argument, it seems natural to ask whether such argument graphs might further possess any useful structural properties, abstracted from the specific propositions they relate. Identifying "useful" structural properties depends, of course, on having some notion of what uses they might have. In a process model of argumentation, a structural property of the representation of an argument is useful if (and only if) it serves some functional role in understanding or generating utterances in the argument. For example, it seems likely that some structural features of the argument would play a role in determining how an utterance relates, via support or attack links, to other propositions in the argument (Flowers et al., 1982; Reichman, 1981). That is, a representation of argument structure should provide expectations as to which prior propositions, if any, a given input would be likely to attack, support, or be supported by. Such information would potentially be useful in reducing the number of prior propositions with which an input must be compared to determine whether or not an evidential relation holds.

A representation of argument structure should also help identify those propositions in an argument which are likely candidates for an arguer himself to attack or support, and thus play a role in planning a rebuttal. Explicit planning of this sort is not as central as might at first be supposed, however, because engaging in an argument is an opportunistic process (McGuire et al., 1981), in which a good rebuttal to an input can often be discovered as a side-effect of the memory processing that is required simply in order to understand that input. One implication of opportunistic processing is that if a direct attack on an input is not found at understanding time, it will most likely be difficult to find one. In these cases, better chances for producing a good rebuttal probably lie elsewhere. Thus, a key function of explicit rebuttal planning, and hence of any structural considerations that enter into such planning, must be to focus attention on other points of possible contention in the argument when no direct attack on the input is discovered opportunistically.

One way to investigate argument structure is to attempt to identify commonly occurring patterns of support and attack relations that encompass several propositions. Because of their relatively fixed structure, these argument molecules (Flowers et al., 1982) can be used to specify which propositions, among those contained in the molecule, are worth trying to attack or support. They can thus be used both to help plan rebuttals, and to generate expectations about an opponent's possible rebuttals.
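To fix intuitions, here is a minimal sketch -- hypothetical Python; the paper gives no implementation -- of an argument graph in which propositions are connected by support and attack links:

    # Hypothetical sketch of an argument graph: propositions connected
    # by "support" and "attack" evidential relations.
    class ArgumentGraph:
        def __init__(self):
            self.supports = []   # list of (evidence, claim) pairs
            self.attacks = []    # list of (evidence, claim) pairs

        def support(self, evidence, claim):
            self.supports.append((evidence, claim))

        def attack(self, evidence, claim):
            self.attacks.append((evidence, claim))

        def evidence_for(self, claim):
            return [e for (e, c) in self.supports if c == claim]

        def attackers_of(self, claim):
            return [e for (e, c) in self.attacks if c == claim]

    g = ArgumentGraph()
    g.support("The PLO's failure to recognize Israel blocks negotiations",
              "The PLO is responsible for blocking negotiations")
    g.attack("Israel doesn't recognize the PLO either",
             "The PLO's failure to recognize Israel blocks negotiations")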
Two kinds of molecules have been identified thus far. The rest of the paper will illustrate the use of one of them in understanding and rebutting. (The other is described in Flowers et al., 1982.) A stand-off is an argument molecule involving a kind of reductio ad absurdum argument to attack an opponent's use of a plausible inference rule in support of one of his points. The attack is effected by showing that the opponent's reasoning can also be used to support some proposition that he cannot accept. For example, consider the following exchange in a mock argument between an Arab and an Israeli over Middle East affairs:

[1] Israeli: Israel can't negotiate with the PLO because they don't even recognize Israel's right to exist.

[2] Arab: Israel doesn't recognize the PLO either.

The Israeli's utterance [1] is clearly an attack of some kind, but what exactly is being attacked? The explicit content of the utterance is that the PLO's failure to recognize Israel is blocking negotiations, and this in turn is based on the assumption that recognition of some sort is always a prerequisite for negotiations. However, this explicit content by itself does not attack anything. The force of the utterance stems instead from an implicit appeal to the notion of responsibility, using an interpretation rule which can informally be stated as follows:

Responsibility attribution: If an actor performs some action (or refrains from performing some action) which causes (prevents) some state of affairs, then that actor is responsible for causing (blocking) that state of affairs, unless its action (failure to act) was justified by some previous state of affairs.

The point of the Israeli's utterance, therefore, is that the PLO is responsible for blocking negotiations -- which is a personal attack (Flowers et al., 1982) on the Arab position. The explicit content of the utterance counts as evidence in support of this claim, by appeal to the responsibility attribution rule. (Carbonell, 1981, has investigated the role of other interpretation rules of this sort in ideology-based reasoning about political events.) Thus, the proper representation of utterance [1] consists of the following fragment of argument graph:

          [1a] The PLO is responsible for blocking negotiations.
                               ^
                               |
                     support: Responsibility
                          attribution
                        /            \
                       /              \
    [1b] The PLO's failure        [1c] The PLO's failure
         to recognize Israel           to recognize Israel
         blocks negotiations.          is not justified.

Proposition [1c] represents the "unless" clause of the responsibility attribution rule. (This analysis of support relations is similar to that proposed by Toulmin, 1958.)

Now, the point of the Arab's response [2] is that, by using the same reasoning that the Israeli invokes in [1], Israel as well can be held responsible for blocking negotiations. Presumably, an Israeli would find this proposition as unacceptable as an Arab would find the Israeli's original claim. Thus, if the Arab is successful, neither disputant can use this line of reasoning without hurting his own position -- which is why this molecule is called a stand-off. The argument graph for the exchange has the following structure:

    [1a] The PLO is                     [2a] Israel is
         responsible for                     responsible for
         blocking negotiations.              blocking negotiations.
              ^                                   ^
              |                                   |
           support  <-------- attack -------->  support
              |                                   |
    [1b] The PLO's failure          [2b] Israel's failure
         to recognize Israel             to recognize the PLO
         blocks negotiations.            blocks negotiations.
    [1c] The PLO's failure          [2c] Israel's failure
         to recognize Israel             to recognize the PLO
         is not justified.               is not justified.

The utility of this structure in planning a rebuttal becomes clear when we examine the Israeli's options in response to [2]. One possibility is to attack the basis of the Arab's argument, in this case [2b] or [2c], for example by arguing that Israel does in fact recognize the PLO. However, if such an attack were possible, it would presumably be discovered opportunistically at understanding time. If so, there is no need to plan a response; if not, then the possibility of attacking [2b] or [2c] seems remote, and so should not be attempted. An attack on the Arab's claim [2a], that Israel is responsible for blocking negotiations, would be pointless, because the thrust of the Arab's argument is not that [2a] must be accepted, but that [2a] must be accepted if [1a] is. An attack on the support relation between [2b] and [2a] would entail an attack on the responsibility attribution rule, which would be fatal to the Israeli's own claim that [1b] supports [1a]. And finally, there is clearly no point in trying to support [1b] or [1c], e.g., by proving that the PLO does indeed refuse to recognize Israel, because the Arab did not dispute it. Thus, within the scope of this stand-off molecule, the Israeli actually has only one option if no rebuttal arises opportunistically: he can attempt to re-support his claim [1a], that the PLO is responsible for blocking negotiations, using different evidence. Failing that, he can of course either concede the point or change the subject.

The stand-off molecule can play a similar role in the process of understanding utterances in an argument. Consider the following continuation of the above exchange:

[3] Israeli: But the PLO is just a bunch of terrorists.

This response can be accounted for by opportunism, because in the course of trying to understand [2], the Israeli must relate [2c], the claim that Israel's failure to recognize the PLO is not justified, to memory. In doing so, he will discover his belief that Israel's failure to recognize the PLO is justified, by virtue of the fact that the PLO is a terrorist organization. Because this response is simply an attack on the Arab's previous utterance, a possibility which should always be expected, it might seem that the stand-off molecule has no role to play in determining how it relates to the argument graph. This is not the case, however, because the representation of the Arab's utterance [2], when its implications are understood, consists of three distinct propositions, plus a support link. From the above analysis of the Israeli's options for rebuttal, it immediately follows that [3] cannot attack [2a] or the support link: only [2b] and [2c] are possible targets.

In order to employ the constraints associated with argument molecules to understand or produce responses, there remains the problem of recognizing such structures when they arise in an argument. If it turns out that there is a relatively small number of molecules, then the most straightforward method would simply be to attempt to match the templates of all of them against the most recent portion of the argument graph. A more interesting approach would involve identifying features of an utterance or an exchange which might typically signal some molecule. For example, an utterance of the form "But X does Y also" is probably a good indication of a stand-off molecule.
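A minimal sketch of both recognition routes just mentioned -- template matching and surface cues -- might look as follows (hypothetical Python; the predicates and field names are illustrative only):

    # Hypothetical sketch: two ways of recognizing a stand-off molecule.
    def is_stand_off(claim, rebuttal):
        """Template match: the rebuttal rests on the same inference rule
        as the original claim, but is advanced by the other disputant."""
        return (claim["rule"] is not None
                and claim["rule"] == rebuttal["rule"]
                and claim["speaker"] != rebuttal["speaker"])

    def stand_off_cue(utterance):
        """Surface cue: utterances of the form 'But X does Y also/either'."""
        u = utterance.lower().rstrip(".")
        return u.startswith("but ") or u.endswith(("also", "either"))

    claim = {"speaker": "Israeli", "rule": "responsibility attribution",
             "text": "The PLO is responsible for blocking negotiations."}
    rebuttal = {"speaker": "Arab", "rule": "responsibility attribution",
                "text": "Israel is responsible for blocking negotiations."}
    assert is_stand_off(claim, rebuttal)
    assert stand_off_cue("Israel doesn't recognize the PLO either.")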
The examples presented here show that an argument molecule can be extremely useful in focusing attention on relevant propositions in the argument graph, both for understanding and rebutting. In essence, a molecule packages knowledge about the logical structure of an argument fragment in a way that makes explicit which potential responses would have some logical force, and which would not. We can conclude that arguments do indeed possess useful structural properties, abstracted from the specific propositions they encompass.

Acknowledgments: Many of the ideas in this paper grew out of discussions with Rod McGuire. Mark Burstein and Steven Salzberg provided useful comments on an earlier draft. This work was supported in part by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N00014-75-C-1111, and in part by the National Science Foundation under grant IST-7918463.

References

Birnbaum, L., Flowers, M., and McGuire, R. 1980. Towards an AI model of argumentation. In Proceedings of the First AAAI Conference, Stanford, CA, pp. 313-315.

Carbonell, J. 1981. Subjective Understanding: Computer Models of Belief Systems. UMI Research Press, Ann Arbor, MI.

Cohen, R. 1981. Investigation of processing strategies for the structural analysis of arguments. In Proceedings of the 19th ACL Conference, Stanford, CA, pp. 71-75.

Flowers, M., McGuire, R., and Birnbaum, L. 1982. Adversary arguments and the logic of personal attacks. In W. Lehnert and M. Ringle, eds., Strategies for Natural Language Processing, Lawrence Erlbaum, Hillsdale, NJ, pp. 275-294.

McGuire, R., Birnbaum, L., and Flowers, M. 1981. Opportunistic processing in arguments. In Proceedings of the Seventh IJCAI, Vancouver, B.C., pp. 58-60.

Reichman, R. 1981. Modeling informal debates. In Proceedings of the Seventh IJCAI, Vancouver, B.C., pp. 19-24.

Toulmin, S. 1958. The Uses of Argument. Cambridge University Press, Cambridge, England.
ARGOT: The Rochester Dialogue System

James F. Allen, Alan M. Frisch, and Diane J. Litman
Computer Science Department
The University of Rochester
Rochester, NY 14627

Abstract

We are engaged in a long-term research project that has the ultimate aim of describing a mechanism that can partake in an extended English dialogue on some reasonably well specified range of topics. This paper is a progress report on the project, called ARGOT. It outlines the system and describes recent results as well as work in progress.

1. Introduction

Consider Dialogue 1, a slightly cleaned up version of an actual dialogue between a computer operator and a user communicating via terminals.

(1) User: Could you mount a magtape for me?
(2) It's T376.
(3) No ring please.
(4) Can you do it in five minutes?
(5) Operator: We are not allowed to mount that magtape.
(6) You will have to talk to the head operator about it.
(7) User: How about tape T241?

Dialogue 1.

We are building a computer system called ARGOT that plays the role of the operator in extended dialogues such as the above. This dialogue illustrates some of the many issues that must be addressed in building such a system. For instance, the first utterance taken literally is a query about the system's (i.e., the operator's) abilities. In this dialogue, however, the user intends it as part of a request to mount a particular magtape. Thus, the system must recognize an indirect speech act. Utterance (2) identifies the tape in question, and (3) and (4) add constraints on how the requested mounting is supposed to be done. These four utterances, taken as a unit, can be summarized as a single request to mount a particular magtape with no ring within five minutes. Once the system makes the above inferences, it generates (5), which denies the request, as well as (6), which provides additional information that may be helpful to the user. The system believes that talking to the head operator will be of use to the user because it has recognized the user's goal of getting a tape mounted. Utterance (7) taken in isolation is meaningless; however, in the context of the entire dialogue, it can be seen as an attempt to modify the original request by respecifying the tape to be mounted.

Allen's [1979] model of language as cooperative behavior provides answers to several of the difficulties suggested by Dialogue 1. The basic assumption of that approach, which is adopted in ARGOT, is that the participants in a dialogue are conversing in order to achieve certain goals. As a consequence, a major part of understanding what someone said is recognizing what goals they are pursuing. In purposeful dialogues this model accounts for helpful responses, as well as for responses to indirect speech acts and some sentence fragments. However, since his model has no knowledge of discourse structure it cannot partake in an extended dialogue.

One of the major advances made in ARGOT is that it recognizes multiple goals underlying utterances. For example, consider the user's goals underlying utterance (2). From the point of view of the task domain, the user's goal is to get the tape mounted (by means of identifying it). From the point of view of the dialogue, the user's goal is to elaborate on a previous request, i.e., the user is specifying the value of a parameter in the plan that was recognized from the first utterance. In the ARGOT system, we recognize both these goals and are investigating the relationship between them.
The need for this type of analysis has been pointed out by many researchers (e.g., [Levy, 1979; Grosz, 1979; Appelt, 1981; Johnson and Robertson, 1981]).

2. Organization of ARGOT

Currently, the ARGOT system is divided into many subsystems, each running concurrently. The three subsystems we shall consider in this paper are the task goal reasoner, the communicative goal reasoner, and the linguistic reasoner. Each of these levels is intended to perform both recognition and generation. In this paper we consider only recognition, since the generative side of the system is not currently being implemented.

The task goal reasoner recognizes goals in the domain of discourse, such as mounting tapes, reading files, etc. The communicative goal reasoner recognizes goals such as introducing a topic, clarifying or elaborating on a previous utterance, modifying the current topic, etc. Allen's earlier system had parts of both types of analysis but they were collapsed into one level. A result of this was that it was difficult to incorporate knowledge of the dialogue structure into the analysis.

Splitting the analysis of intention into the communicative and task levels brings about the problem of identifying and relating the high-level goals of the plans at each level. The high-level goals at the task level are dependent on the domain, and correspond to the high-level goals in the earlier model. The high-level communicative goals reflect the structure of English dialogue and are used as input to the task level reasoner. In other words, these goals specify some operation (e.g., introduce goal, specify parameter) that indicates how the task level plan is to be manipulated. Our initial high-level communicative goals are based on the work of Mann, Moore and Levin [1977]. In their model, conversations are analyzed in terms of the ways in which language is used to achieve goals in the task domain. For example, bidding a goal is a communicative action which introduces a task goal for adoption by the hearer.

Given the communicative goals, we must now be able to recognize plans at this level. Neither Mann et al. [1977] nor Reichman [1978] have described in detail the process of recognizing the communicative goals from actual utterances. Currently, we adapt Allen's [1979] recognition algorithm, which finds an inference path connecting the observed linguistic action(s) to an expected communicative goal. This algorithm uses the representation of the utterance from the linguistic level and a set of possible communicative acts predicted by a dialogue grammar which indicates what communicative acts are allowed at any particular time for both participants, and is modeled after Horrigan [1977].

The work at SRI [Walker, 1978] in expert-apprentice dialogues monitored the goals of the user at the task level. The only analysis at the communicative goal level was implicit in various mechanisms such as the focusing of attention [Grosz, 1978]. Their work ties the task structure and communicative structure too closely together for our purposes. Appelt [1981] also views utterances as actions which satisfy goals along various explicit dimensions -- a social dimension as well as what would correspond to our task and communicative levels. However, his communicative dimension is again mainly concerned with focusing.

The linguistic level is responsible for providing input to the other levels of analysis that reflects the content of the actual utterances.
This parser will be based on the Word Expert Parser system of Small and Rieger [1981]. As the linguistic analysis progresses, it will notify the other levels of the various noun phrases that appear as they are analyzed. This allows the other levels to start analyzing the speaker's intentions before the entire sentence is linguistically analyzed. Thus, an interpretation may be found even if the linguistic analysis eventually "fails" to find a complete sentence. (Failure is not quite the correct word here, since if the utterance is understood, whether it was "correct" or not becomes uninteresting.) We are investigating other information that could be useful for the rest of the system during parsing; for instance, the recognition of clue words to the discourse structure [Reichman, 1978]. If a user utterance contains the word "please," the communicative level should be notified so that it can generate an expectation for a request. In addition, the rest of the system may be able to provide strong enough expectations about the content of the utterance that the linguistic level is able to construct a plausible analysis of what was said, even for some ungrammatical sentences.

3. Issues in Knowledge Representation

All levels of analysis in ARGOT make use of a common knowledge representation and a common knowledge base module (KB). The KB stores a set of sentences of the representation language and provides retrieval facilities for accessing them. These retrieval facilities are used extensively at all levels of analysis. Because of this, the sentences stored in the KB not only represent the knowledge associated with magtapes and the doings of a computer room, but also the knowledge necessary to deal with language. This includes knowledge of physical, mental, and linguistic actions, how these actions are involved in plans, what computer users do, and what they expect of computer operators.

Following Brachman [1979b], our representation is constructed of two levels: the epistemological level and the conceptual level. The epistemological level provides a set of knowledge structuring primitives that are used to construct all conceptual level entities (e.g., action, time, and belief). Each level of the representation provides primitive symbols (particular predicate symbols, function symbols, and constant symbols) which can then be combined using the notation of FOPC. By inheriting the logical connectives and quantificational structure of FOPC, the resulting representation language is quite expressive.

3.1 The Epistemological Level of Representation

The epistemological level of the representation supplies a fixed set of predicates which are the knowledge-structuring primitives out of which all representations are built. The choice of knowledge-structuring primitives has been motivated by the study of semantic networks. For instance, where a semantic network such as Brachman [1979b] might have [a network diagram appeared here: Tape-Mountings drawn as a subtype of Events, with an individual node Operator-Mounting-T376 whose Agent arc points to Operator and whose Object arc points to T376], we would have

    SUBTYPE(Tape-Mountings, Events)
    TYPE(Operator-Mounting-T376, Tape-Mountings)
    ROLE(Operator-Mounting-T376, Agent, Operator)
    ROLE(Operator-Mounting-T376, Object, T376)

Notice that the SUBTYPE predicate corresponds to the unshaded double arrow, the TYPE predicate to the shaded double arrow, and the ROLE predicate to the single arrow. Our constants are sorted into individuals (e.g.: Operator-Mounting-T376, Operator, T376), types (e.g.: Events, Tape-Mountings) and rolenames (e.g.: Agent, Object).
These sorts somewhat correspond to the shaded oval, unshaded oval and shaded box of the network. Allen and Frisch [1982] have fully described and axiomatized the epistemological level of the representation and have compared it to semantic networks.

3.2 The Conceptual Level of Representation

The representation of actions is crucial to a dialogue participant for two reasons. The first is that the participant must be able to represent the meaning of utterances that refer to actions (e.g., "Can you mount a magtape for me?" refers to the action of mounting). The second, as previously discussed, is that it is advantageous to model the language comprehension and production processes as purposeful, planned action (e.g., uttering "Can you mount a magtape for me?" is a requesting action). However, existing models of action, most notably the state-space approach (e.g., [Fikes and Nilsson, 1971]), appear inadequate for the above purposes. Since a major deficiency with the existing models is an inadequate treatment of time, we first turn our attention to this issue.

An interval-based temporal logic and its associated inference processes have been defined [Allen, 1981a]. Rather than using a global time line, the representation employs a hierarchical set of reference frames. A particular interval is known by its location relative to the reference frames and other intervals. This is particularly important in a dialogue system, for most temporal knowledge does not have a precise time. This representation of time has been used to produce a general model of events and actions [Allen, 1981b]. The occurrence of an event corresponds to a partial description of the world over some time interval. Actions are defined as that subclass of events that are caused by agents. This is in contrast to the state-space view of an action as a function from one world state to a succeeding world state. Our approach enables the representation of actions that describe inactivity (e.g., standing still), preserving a state (e.g., preventing your television from being stolen), and simultaneous performance of simpler actions (e.g., talking while juggling).

Representing actions, particularly speech acts, requires the representation of beliefs. For example, the effect of the speech act of informing involves changing the beliefs of the hearer. A model of belief has been developed that treats BELIEVE as a predicate on an agent and a sentence of the representation language. To do this, there must be a name for every sentence in the language. Perlis [1981] and Haas [1982] have introduced naming schemes that provide enough expressiveness to deal with traditional representational requirements such as quantifying in. Haas [1982] has used this formulation of belief to predict an agent's action by constructing plans that can include mental actions. His treatment of belief and action does not suffer from the problem of the possible worlds approach [Moore, 1979] that an agent believes all consequences of his beliefs.

3.3 The Knowledge Base Module

The Knowledge Base (KB) provides a set of retrieval facilities that is the sole access that the system has to the sentences stored in the KB. This retrieval facility corresponds to the matcher in a semantic network representation. Since retrieval must respect the semantics of the representation, it is viewed as inference. However, this inference must be limited because retrieval must terminate, and must do so in a reasonable amount of time.
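For concreteness, here is a toy sketch -- hypothetical Python, not ARGOT's KB -- of such a store of epistemological-level sentences, with a retrieval routine that builds in the semantics of SUBTYPE and TYPE (the built-in axioms it implements are spelled out below):

    # Hypothetical sketch: epistemological-level sentences for the
    # tape-mounting example, stored as tuples in a toy knowledge base.
    KB = {
        ("SUBTYPE", "Tape-Mountings", "Events"),
        ("TYPE", "Operator-Mounting-T376", "Tape-Mountings"),
        ("ROLE", "Operator-Mounting-T376", "Agent", "Operator"),
        ("ROLE", "Operator-Mounting-T376", "Object", "T376"),
    }

    def subtype_star(t1, t2):
        """Reflexive-transitive closure of SUBTYPE, built into retrieval."""
        if t1 == t2:
            return True
        subs = [f for f in KB if f[0] == "SUBTYPE" and f[1] == t1]
        return any(subtype_star(f[2], t2) for f in subs)

    def has_type(obj, t):
        """An object has a type if it is a member of it or of a subtype."""
        types = [f for f in KB if f[0] == "TYPE" and f[1] == obj]
        return any(subtype_star(f[2], t) for f in types)

    assert has_type("Operator-Mounting-T376", "Events")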
Frisch and Allen [1982] have shown how a limited inference engine suitable for knowledge retrieval can be given a formal, non-procedural specification in a metalanguage and how such a specification can be efficiently implemented. The capabilities and limitations of the retriever can be thought of intuitively as follows. A set of axioms dealing solely with the epistemological primitives is built into the retriever. For example, three of these axioms are:

    ∀ t1,t2,t3  SUBTYPE(t1,t2) ∧ SUBTYPE(t2,t3) → SUBTYPE(t1,t3)
        (SUBTYPE is transitive.)

    ∀ o,t1,t2  TYPE(o,t1) ∧ SUBTYPE(t1,t2) → TYPE(o,t2)
        (Every member of a given type is a member of its supertypes.)

    ∀ x,r,y,y'  ROLE(x,r,y) ∧ ROLE(x,r,y') → y = y'
        (Role fillers are unique.)

Through these built-in axioms, the retriever "knows about" all of the epistemological primitives. The retriever's power comes from the fact that it can, for the most part, reason completely with the built-in axioms. Its limitations arise because it only partially reasons with the sentences stored in the KB. The retriever also has knowledge of how to control inferences with the built-in axioms. In this manner, the retriever only performs those inferences for which it has adequate control knowledge to perform efficiently.

4. A Simple Example

Let us trace a simplified analysis of utterance (1), "Could you mount a magtape for me?" The communicative acts expected at the start of a dialogue by the grammar are (in an informal notation) user BID-GOAL to system, and user SUMMON system. Taking the utterance literally, the linguistic level uses both syntactic and semantic analysis to identify the linguistic actions (speech acts) performed by the speaker. For utterance (1) we have user REQUEST that system INFORM user if system can mount a tape, which is sent to the communicative level.
The task level reasoner would generate the goal for the system to identify the tape and would send this goal to the communicative goal reasoner. This reasoner would plan a speech act (or acts), obeying the constraints on well-formed discourse, that could lead to accomplishing the goal of identifying the tape. This speech act then would be sent to the linguistic level which would generate a response such as “Which tape?” In Dialogue 1, however, the user identifies the tape in utterance (2), which the communicative level recognizes as a SPECIE‘Y-PAHAME?‘EKl’~I~ action for the plan created by the initial UID-GOAL action. 5. Current State and lhture Directions We have implemented the knowledge base, a simple dialogue grammar, and simple plan recogniLers at both the communicative and task levels. Furthermore, we are currently incorporating a word-expert parser [Small and Kieger, 19811. As discussed in the previous sections, further research on all aspects of ARGOT (i.e. the levels, the interactions between them, and the theoretical models) is still needed. Acknowledgements We would like to thank Bill Matm for providing us with the dialogues. We thank Lokendra Shastri and Marc Valain for their contributions to the development of ARGOT and Dan Russell for his helpful comments on an earlier version of this paper. This work has been supported in part by NSF Grant ET-8012418 and DARPA Grant N00014-82-K-0193. Hefereuces Allen, J.F., “A plarl-based approach to speech act recognition,” Ph.D. thesis, Computer Science Dept., U. ‘roroIlto. 1979. Allen, J.F., “An interval- based representation of temporal knowledge,” Yroc., 7th Int’l. Joint Conf. on Artificial Intelligence, Vancouver, 13.C., 1981a. Allen, J.F., “What’s necessary to hide?: Reasoning about action verbs,” Proc., 19th Annual Meeting, Assoc. for Computational Linguistics, 77-81, Stanford U., 1981b. Allen, J.F. and A.M. Frisch, “What’s in a semantic network?” hoc., 20th Annual Meeting, Assoc. for Computatronal Linguistics, U. Toronto, June, 1982. Awelt, D., “Planning natural language utterances to satisfy multiple goals,” Ph.D. thesis, Computer Science Dept., Stanford U., 1981. Brachman, R.J., “On the epistemological status of semantic networks,” in N.V. Findler (Ed.). Associative Networks. New York: Academic Press, 1979b. 1:ikes. R.13. and N.J. Nilsson, “STKIPS: A new approach to the application of theorem proving to problem solving,” Artificial Intelligence, 2, 189-205, 1971. I:risch, A.M. and J.I:. Allen, “Knowledge retrieval as limited inference,” Lecture Notes on Computer Science: 6th Conference on Automated Deduction Proceedit lgs. New York: Springer-Verlag, 1982. Grosz, B.J., “Discourse knowledge,” In [Walker, 19781. Grosz, B.J., “Utterance and objective: issues in natural language communication,” hoc., 6th Int’l. Joint Conf. on Artificial Intelligence, Tokyo, 1979. Haas, A.R., “Mental states and mental actions in planning,” Ph.D. thesis, Computer Science Dept., U. Rochester, 1982. Horrigan. M.K., “Modelling simple dialogues,” Yroc.. 5th Int’l. Joint Conf. on Artificial Intelligence, MIT, 1977. Johnson, P.N. and S.P. Robertson, “MAGPIII: A goal- based model of conversation,” Research Report #206, Computer Science Dept., Yale U., May 1981. I,evy, D.. “Communicative goals and strategies: between discourse and syntax,” in T. Givon (ed.). Syntax and Semantics, Vol. 12. New York: Academic Press, 1979. Mann, W.C., J.A. Moore, and J.A. Levin, “A comprehension model for human dialogue,” Proc., 5 th Int’l. Joint Conf. 
on Artificial Intelligence, MIT, 1977.

Moore, R.C., "Reasoning about knowledge and action," Ph.D. thesis, MIT, 1979.

Perlis, D., "Language, computation, and reality," Ph.D. thesis, Computer Science Dept., U. Rochester, 1981.

Perrault, C.R. and J.F. Allen, "A plan-based analysis of indirect speech acts," J. Assoc. Comp'l. Linguistics 6, 3, 1980.

Reichman, R., "Conversational coherency," Cognitive Science 2, 1978.

Small, S.L. and C. Rieger, "Parsing and comprehending with word experts (a theory and its realization)," TR 1039, Dept. Computer Science, U. Maryland, 1981.

Walker, D.E. Understanding Spoken Language. New York: North-Holland, 1978.
Dividing Up The Question Answering Process*

Marc Luria
Division of Computer Science
Department of EECS
University of California, Berkeley
Berkeley, CA 94720

Abstract

This paper describes a question answering program which divides question answering into two separate processes: answer formation and answer expression. Rather than gathering possible answers and choosing the best among them, the program accesses the database and finds all components of possible answers, e.g., a causal chain, and then passes this information to an expression program which formulates a proper answer.

1. Introduction

I have developed a question answering program that will answer questions about simple stories. In my program, question answering is divided up into two separate processes: 1) answer formation and 2) answer expression. The program first looks down a causal chain which is formed by the story-understanding program and figures out in what part of the chain the answer lies. The answer can also be a subset of the chain, sometimes a quite long one. The second part of the program takes this long chain and decides what things are important to express to the questioner. This answer expresser uses general rules of expression to figure out what it needs to include to make the answer understandable, informative and interesting.

This solution is different from other question-answering algorithms (e.g., Winograd 1972, Lehnert 1977) which view question answering as one process. These programs gather possible answers, and then choose the 'best' answer from among them. My system first gets the chain which I consider to be the answer to the question, and then figures out which parts of the chain should be generated into English as the answer. The advantage of my approach is that it allows one to treat the answer as one entity and use the answer expression mechanism to express what people are interested in. The resulting answers are generally more informative and conversationally appropriate than those generated by other algorithms.

(*This research was sponsored in part by the Office of Naval Research under contract N00014-80-C-0732 and the National Science Foundation under grant MCS79-06543.)

The program works in conjunction with FAUSTUS, a story understanding program that specializes in goal-based stories [2,3]. After a story is initially 'read' by PHRAN, a parser [1], it is then passed to PAMELA and 'understood'. The question answering program is passed a database which consists of events, inferences, and most importantly, for my purpose, causal chains which instantiate events in the story as steps of particular plans and plans for particular goals. Contained in this causal chain is the actual 'understanding' of the sequence of events in the story, what caused what, and what goal actors had in mind when they performed a particular act or plan. After a question is asked, this question is parsed by the same parser that parsed the story, and then the answer is formulated by looking at the database. Finally, the answer is passed to the answer expresser which sends the answer to a natural language generator.

2. Program Examples**

The following examples were processed by my program.

Story: Susan saved her money from her allowance. One day she rode her bike to the bookstore and bought the book that her teacher had recommended. Susan did very well on her math test the following week.
Q1: Why did Susan buy the book?
A1: So that she could study from it and do well on her exam.

Q2: How did Susan do so well on her math exam?
A2: She bought a book that her teacher had recommended and studied from it.

Q3: How did she get the book?
A3: By riding her bike to the bookstore.

3. Finding the Best Answer

A difficult and important part of answering a question is not finding an answer to the question, but finding the best answer. In a database of causal chains, if one can find an event in the database then there may be many possible answers to a given question. Consider the previous story. If we ask:

Q4: Why did Susan buy the book?

The following answers are obtained by stepping down the causal chain.

A4a: Because she wanted to have it.
A4b: Because she wanted to read it.
A4c: Because she wanted to know math.
A4d: Because she wanted to do well on her exam.

Note that the items nearer the top of the goal structure constitute better answers, although the best answer would be something like:

A4e: So that she could study it and do well on her math test.

**At this point the program is not connected to the natural language parser at Berkeley called PHRAN or the generator PHRED (Wilensky and Arens, 1980). The questions and answers are therefore translated from the conceptual form I now use.

However, in a more complicated story, merely looking to the end of the chain might not work quite as well. For example, if in the previous story we added:

She put the book on her head and learned the material through osmosis. Susan did very well on her math test the following week.

Clearly, Answer4d is no longer a good answer. One possible solution is including only 'important' answers. Important inferences might include abnormal plans, natural disasters, etc. The problem with this was that even though these 'important' inferences definitely should be included in the answer, one should not necessarily stop at that point in the chain and say that this is the answer. For example, just stopping at 'important' events in response to Question4 one would get:

A4f: So that she could put it on her head.
A4g: So that she could learn by osmosis.

which is less desirable than:

A4h: So that she could learn from the math book by osmosis and do well on her exam.

4. Dividing up the Question Answering Process

My program is able to find these better answers because of the separation of finding the answer (the subset of the chain) from expressing the answer to the user. Instead I use the two programs:

Answer-Formulator: looks down a causal chain, figures out what parts of the chain are relevant to an answer and returns a chain.

Intelligent-Expresser: takes this causal chain as input, figures out from its general rules of expression what is important to say so that the questioner will a) understand the answer and b) get the kind of information that people are generally interested in, and outputs to a natural language generator some intermediate form from which it could generate an answer.

For example, my program would produce Answer4e above by the following process. First it would find Susan buying the book in the database and then follow the chain, in this case, to where it finds that she did well on her exam. This whole part of the chain is passed to the expression mechanism, which notices that studying the book and doing well on her exam were important parts of the answer. In this case, the Intelligent-Expresser uses the general conversational rule of not informing someone of something they already know. Having the book and reading the book are thereby eliminated because they are stored in the database as normative purposes for buying and for having a book, respectively.
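The division of labor just described can be suggested with a small sketch. The following Python fragment is an illustration only, not the Berkeley program (which works over FAUSTUS's conceptual database); the chain, the set of 'important' events, and the normative-purpose filter are all invented stand-ins:

    # Illustrative two-phase question answering over a causal chain.
    # The chain and the two filter sets are invented stand-ins for the
    # story database; only the division of labor matters here.
    CHAIN = ["buy-book", "have-book", "read-book", "know-math",
             "do-well-on-exam"]
    IMPORTANT = {"do-well-on-exam"}          # e.g. goals, abnormal plans
    NORMATIVE = {"have-book", "know-math"}   # questioner can infer these

    def form_answer(chain, event):
        """Phase 1 (Answer-Formulator): the chain downstream of the event."""
        return chain[chain.index(event) + 1:]

    def express_answer(subchain):
        """Phase 2 (Intelligent-Expresser): drop steps the questioner
        already knows, but always keep anything important."""
        return [e for e in subchain
                if e in IMPORTANT or e not in NORMATIVE]

    # "Why did Susan buy the book?"
    print(express_answer(form_answer(CHAIN, "buy-book")))
    # -> ['read-book', 'do-well-on-exam']   (cf. Answer4e)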
This approach also allows one to generate answers that were otherwise problematic to represent in a conceptual form. For example, the simple question:

Q5: Did Susan go to the bookstore?
A5: Yes, she rode her bike there.

The answer is obviously yes, because this event appears in the database. However, 'yes' is something that is difficult to represent in conceptual form. 'Yes' is not really a concept but rather a word that is almost exclusively used in a conversation. The answer formation part of my system looks in the database for concepts similar to going to the bookstore. Realizing that riding to the bookstore was similar to going there, it would answer:

    (ride (actor (person (object susan1)))
          (object (bicycle (object bicycle1)))
          (destination (bookstore (object bookstore1))))

This part of the chain and the context in which the question was asked is passed to the answer expression part of the program, which would a) see that this is a simple verify question, b) realize that the concept to be verified was in fact found in the database in a slightly different form, and c) figure out that it should answer 'yes' plus some intermediate form that represents that it should include the ride concept.

This same method can be extended to other types of verify questions. For example:

Q6: Did Susan ride her bike to the bookstore so that she could do well on her math test?
A6: Yes, she bought a book at the bookstore which she used to study for her exam.

Q7: Did Susan buy the math book so that she could do well on her math test?
A7: Yes, she used it to study for her exam.

The answer formation part looks to see if a chain that starts with 'riding to the bookstore' and ends with 'doing well on her math test' exists in the database. This whole chain does exist and includes: riding to the bookstore was a plan for being at the bookstore, which was a precondition for buying a book, which was a plan for having the book, which was a step of reading the book, which was a plan for knowing the math material, which served the goal of doing well on her exam. The answer expression part of the program gets this chain, realizes it should answer 'yes' and decides how much in addition to the 'yes' it would need to include in the answer. Notice how in Answer4 it had to include more information from this chain than it had to include in Answer5.

5. Conclusion

This intelligent expression part of the program is not something that is designed to be used exclusively in question-answering but is a system that would be valuable in any context where an interactive natural language system is important. It differs from a generator in that it does not merely generate something from a conceptual form into English, but rather decides what kinds of things are important to be said, which is then passed to a generator. Hopefully, this kind of system could be expanded to work on other conversational tasks as well.

References

[1] Lehnert, W. 1978. The Process of Question Answering: A Computer Simulation of Cognition. Hillsdale, N.J.: Lawrence Erlbaum Associates.
[2] Wilensky, R. 1978. Understanding Goal-Based Stories. Technical Report 140, Computer Science Department, Yale University, New Haven, CT.
[3] Wilensky, R.
and Arens, Y. 1980. PHRAN - a Knowledge Based Approach to Natural Language Analysis. University of California at Berkeley, Electronic Research Laboratory Memorandum No. UCB/ERL M80/34.
[4] Wilensky, R. 1981. Meta-planning: Representing and using knowledge about planning in problem solving and natural language understanding. Cognitive Science, Vol. 5, No. 3, 1981.
[5] Winograd, T. 1972. Understanding Natural Language. New York: Academic Press.
BUILDING EXPERT SYSTEMS FOR CONTROLLING COMPLEX PROGRAMS

Sholom Weiss, Casimir Kulikowski, Chidanand Apté, Michael Uschold
Department of Computer Science, Rutgers University

Jay Patchett, Robert Brigham, Belynda Spitzer
Amoco Production Research

Abstract

Production rule schemes have proven quite effective in concisely representing expert knowledge in several application areas. Yet, there are many problems for which one would like to take advantage of additional knowledge that may not be easily represented in these surface level models. One class of problems of particular practical interest is those in which we would like to have a computer-based system give interactive advice on how to control and interpret results from a set of complex and interrelated applications programs. The advice may refer to interpretations of current results, possible experiments that should be performed with the help of the applications programs, and indications of inconsistencies in specific analytical procedures and in problem solving sequences followed by the user. In the present paper we report on our experiences in designing an expert system (ELAS), of the type described above, for well log analysis in oil exploration and production. We have integrated a production rule advice model (using the EXPERT system) with existing Amoco software for well-log analysis and display. In doing so, the original system for well-log analysis was reorganized so that its use could be monitored and controlled, and its knowledge structured according to the types and sequences of methods used by expert analysts. By varying the assumptions and parameters used in the different individual analyses, our goal is to make available interactive interpretations of the alternative approaches that an expert might take to a complex problem of well-log analysis.

I Introduction

In a recent article [1], Hart describes several of the research issues arising in the design of multi-level expert systems. He contrasts surface level models with deep models of reasoning. By way of example, Hart describes a hypothetical system for advising petroleum engineers using a multi-level approach. The surface level model is of the production rule type, whereas the deep model is a purely mathematical description of an oil reservoir expressed as a set of partial differential equations. The latter is typically implemented as complex software tools, such as reservoir simulators.

We have built a multi-level expert system called ELAS (Expert Log Analysis System) for carrying out well-log analysis. Well logs are the various electromagnetic, sonic and nuclear signals obtained from instruments placed downhole in a well, which characterize the properties of the rock and fluid formations around the borehole. From a practical applications point of view, well log interpretation represents an important problem, since it permits an assessment of the likely presence of hydrocarbons and possible yields of the well during exploration and production. From the perspective of expert systems research, this application is proving very helpful in increasing our understanding of representation, communication and control processes in multi-level systems. And, from the more general software engineering point of view, we are learning how one might exploit existing software systems more fully by building a coordinating and advisory system that makes these programs easier to use by a wider variety of expert and non-expert users alike.
In many problem areas, it is not unusual to find that valuable software has already been developed to aid the expert in data analysis, the design of experiments, and the interpretation of results. These programs are often quite complex packages, developed over several years and enhanced through extensive user experience. In designing an expert system, it is only natural that one should want to take advantage of such software. One of the first efforts in modeling expert advice on the use of a complex program was the SACON [2] project, which developed an advisory model for the MARC structural analysis program. However, there was no interaction between the two programs: SACON was run before the MARC program, giving advice on its prospective use. In order to develop an expert system to its fullest potential, interaction is needed between the advising program and the application programs. In a sophisticated system, the interpretive program will be fully integrated with the application programs, so that they communicate their results to one another, and advice changes dynamically as the model tracks the user interaction. Furthermore, the system must have the ability to automatically take a recommended action if the user agrees. In effect, we will have a program that not only gives advice, but also can accept the advice and act on it.

II Comparison with an Interpretation/Classification Model

An expert model which interacts with a complex program can be thought of as an extension of the type of interpretation/classification models that have been widely used in medical consultation (such as CASNET and MYCIN) and geology (PROSPECTOR). Regardless of the underlying knowledge representation, these models are typically implemented as programs which ask questions of the user until enough evidence has been accumulated for the model to offer an interpretation. In situations where the human experts follow agreed upon procedures for eliciting evidence, the questions are often highly structured, and one can expect that the results will be reported in a systematic and ordered fashion. Although some of the results may be gathered by external sources (such as instruments), they are typically "filtered" through the user who enters them into the program. In a few situations, where most of the evidence is taken directly from instrument data [3, 4], signal processing algorithms feed directly into the reasoning model.

In a model that is completely integrated with a set of complex applications programs, so that they appear to the user as a single program, important additional functional characteristics are needed that go beyond those found in the typical interpretation/classification type models:

- The advisory model must be able to receive data or evidence automatically from the application programs, in addition to that reported by the user.
- The logic of the model must be prepared to interpret evidence in real-time.
- The model must be able to not only suggest advice on the interpretation of evidence, but must also monitor how the user reacts to this advice in his subsequent choice of methods of analysis, and then provide new advice (to fit the dynamically changing situation) if the user requests it.

Yet, despite the above differences we can use an important analogy from the more traditional consultation systems: the set of applications programs can be made to communicate with the advice model primarily through the evidence, as long as we are willing to broaden our definition of what we consider to be evidence. We will now take evidence to include not only facts about the particular domain problem, but also the status of the actions taken by the user while interacting with the applications programs, and the results of key calculations performed by these programs as the result of the user's actions. In the advisory model, we must represent classes and sequences of user actions, their consistency relationships, and expected effects. To be both effective yet flexible to the nuances of expert problem solving, the advisory system must monitor and interpret the actions of the user in a type of background mode. The user must have a considerable degree of freedom in the choice of methods and direction of analysis, yet should be discreetly warned if his choice is leading to a likely dead-end, or is introducing harmful inconsistencies in the analysis or interpretation. If the analysis is proceeding smoothly, as might be expected with an experienced user, the model should be available in an advisory and summarizing capacity, without interfering with the user's control over the problem solving flow.

The logic for the interpretive analysis can be stated in the form of production rules as in the usual interpretation/classification models. However, the designer of the model must take into account the newly enlarged scope of types and the dynamically varying nature of the evidence that must be handled. While in a traditional consultation system we expect evidence to remain relatively stable during a single consultation session, here, because the control and monitoring of the separate applications programs must be carried out in real time, evidence will change values very frequently in the course of a session. For example, an observation such as, Task A not performed, will be changed once Task A is done; or a numerical result may be received from one program, and then be changed as the consequence of further analysis and processing by another program. This type of situation resembles a consultation where someone is continually modifying or updating the evidence, and sequentially asking for an interpretation. There are some analogies to the medical problem of evaluating a patient over time. Here, however, the time frame is highly condensed into a real-time evaluation.

Another important difference with the usual interpretation/classification model is that the controlling model has the opportunity to go beyond just giving advice; it can provide the means of accepting the advice and taking the recommended action in real time. This step may prove somewhat more difficult for expert system builders than might be initially expected, since the model and the set of application programs must be unified in what appears to the user as a single system.
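A minimal sketch may help fix the idea of evidence that changes in mid-session. The Python fragment below is illustrative only (ELAS itself couples FORTRAN application programs to the EXPERT rule interpreter), and its fact names and single rule are invented; the point is simply that every update to the evidence immediately re-runs the interpretation:

    # Illustrative re-interpretation over dynamically changing evidence.
    # Fact names and the single rule are invented for the example.
    evidence = {"task-A-done": False}

    RULES = [
        (lambda e: not e["task-A-done"], "Task A has not been performed."),
    ]

    def reinterpret(e):
        """Re-run every rule against the current evidence."""
        return [advice for test, advice in RULES if test(e)]

    print(reinterpret(evidence))    # ['Task A has not been performed.']
    evidence["task-A-done"] = True  # the user performs Task A ...
    print(reinterpret(evidence))    # ... and the advice is withdrawn: []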
In addition, a single task of the expert may require the compilation of many steps through one or several of the programs, or even combining or adding steps that are not directly available in the application programs. Although the application programs may not have to be completely recoded, new code may be needed to integrate steps that reflect the expert's particular approach to analyzing a problem. In our experience, we have found that this can be done successfully while building an overall expert system.

III The ELAS Expert System for Well-Log Analysis

The main goal of the ELAS project is to introduce methods of expert systems in well-log interpretation. In ELAS we are demonstrating how the knowledge and reasoning methods of an expert log analyst can be combined with INLAN - Amoco's large scale interactive program for well log data analysis and display [5]. The Dipmeter Advisor, which is a knowledge-based interpretation model for a very specialized type of log analysis, has been described previously by others [6]. In our system, we are concentrating on the more widely used and more numerous suites of logs for which there is quite a large body of published [5] and unpublished results. Working with the expert analyst, we have observed the sequence of steps involved in solving certain typical interpretation problems, and encoded them as a set of advice rules using the EXPERT [7] formalism. Thus, more specific objectives in developing ELAS include:

- formalizing methods of expert well-log analysis into a representation that can be easily used and understood by others;
- providing interpretations based on the data, the actions, and expectations of the user and the program.

Our first prototype system is running on IBM's VM/CMS operating system using an RPQ terminal configuration. This configuration consists of a Tektronix graphics terminal with a joystick, an IBM CRT, and a single alphanumeric keyboard. The system allows the user to interactively perform experiments in the analysis of logs. Advice is generated based on the results of previous experiments, and a running summary is kept of the actions already taken. The advice system is fully integrated with INLAN, while based on the EXPERT production rule scheme.

One of the contributions to a specific domain that comes from building an expert system is that it helps to structure and organize practical problem solving knowledge in the domain. When many of the important methods of interpretation are informally specified, and personal to the experts in the domain, the expert systems formalization can be particularly valuable. By encouraging or even forcing a formal structure on the domain knowledge, and incorporating it into a runnable expert system, the knowledge becomes testable, reproducible and more widely reportable to others.

While many general software programs require expertise not only in the application domain but also in the use of the program, our objective in developing ELAS has been to produce a new, powerful yet easy to use experimental tool for the well-log analyst. To make interaction easy for users, the front-end of the ELAS system has as its top level a Master Panel, which holds a snapshot of the current status of the analysis of an already selected well (Figure III-1).

Figure III-1: Preliminary Master Panel

Most user-program communication is controlled through this master panel.
It is displayed on the Tektronix graphics screen, and includes a concise set of key parameters and tasks that are crucial in well log analysis. Here, a parameter may be a constant, a log (represented as a vector of digitized values for each foot of depth in the well), or an expected characteristic of the well (e.g. the presence of gas in some zone). The value of the parameter will influence the results of the subsequent analysis.

Whereas the original software was an interactive system that required specification of many detailed operations, our task has been to develop a much higher level system that compiles many of these smaller steps. At the initial user level there is a superficial similarity to VisiCalc, the highly successful personal computer program. In the simpler environment of VisiCalc, we see a program that presents information in a concise format and allows the user to vary a parameter and then watch all dependent results change. In our case, we are faced with a much more complicated computational task, but we too try to show the propagation of effects that follow from the user's change of a parameter value or choice of analysis method within as short a time as possible, ranging from almost instantaneous to many seconds. This is done by updating the master panel at the top level, from which the user will be led to more detailed panels or displays for the specific methods. Changing a parameter may imply quite a large number of computational steps, and not all information can be described in a simple tabular format.

In ELAS the user can direct both the mathematical analysis and the interpretive analysis by changing parameters or invoking tasks through the master panel. As just stated, the outcomes of mathematical analyses that follow are then reported back to the user through this same panel. The expert system keeps updating its interpretive analysis after every change in the evidence so that it always reflects the current status of the panel. The system also synchronizes all derived logs which are affected by changes in the panel parameters or methods. Changes are made either through user action or updates in the mathematical analysis. The user has the freedom to carry out an entire well log analysis sequence without ever asking for advice from the system, or he can proceed to request advice for any stage where he feels the need or curiosity for it.

The example in Figure III-1 shows the master panel for a sample test well, TEST UNIT 9 1 134. The five columns correspond to five different geological zones (by depth) which have been chosen for analysis. The rows correspond to the parameters for the zones. Initial values for some of the parameters must be supplied by the user. Many of them, however, may be obtained through subsequent analysis. Some parameters may stand for specific tasks that the user might want to invoke to help in the analysis.

Consider another sample test well, NO. 25 TEST FIELD UNIT, for which the user wants to examine (on his own, or following the advice of the model) the effect of lithology on the calculation of rock porosity in a zone. The appropriate box on the panel for that method and zone is marked. The marking of that box causes the system to invoke the task, displaying first a crossplot of the zone depth points with the various lithological choices made available for further analysis (Figure III-2).
These are shown as the SS (sandstone), LMST (limestone), and DOL (dolomite) boxes on the crossplot. Once the user has made his choices (of two of these boxes, in this particular instance), the program calculates the corrected logs using the lithological assumptions, and then returns to the main panel, which now reflects any updates due to this task invocation. This is an example of a single step in an incremental analysis. If the user wishes to see the full extent of analyses that follow a particular change in parameter value, all he needs to do is mark the TOTAL ANALYSIS box on the side of the master panel, and a set of derived logs, which are particularly conclusive, will be displayed. In addition, the interpretation for each geological zone will be given. The logs displayed in Figure III-3 were plotted using a batch plotter program. They represent the type of display we are designing, although the current version of ELAS does not shade the overlaid portions of logs while displaying them.

Figure III-2: Neutron Density Crossplot

Figure III-3: Log Display

The system also maintains consistency between dependent tasks, which is necessary whenever a significant parameter changes values. This can involve computations ranging from almost negligible to very formidable. For example, one of the most frequently used formulas in log analysis is:

    Sw = (F * Rw / Rt)^0.5    (1)

where the variables are quantities that can be changed through the master panel. Even a minor change in one of the variables in this equation involves a recomputation for all the points in the log, which are usually in the thousands. This will immediately cause reinterpretation and revision of previous conclusions and recommendations.

In summary, we are emphasizing that the formalization of the methods may very well require a new organization and presentation of the original software, and may not be quite as simple as feeding numbers and arguments back and forth between the original software and the model. A clean interface is of course an important ingredient in the ultimate success of the program. In our case, the system will lead the user into many specialized routines associated with a particular task. Often, the setting of a key parameter may require a separate analysis in itself.

The well log analysis model considers 3 types of advice which can be described in a production rule framework:

1. interpretation of existing evidence and past actions
2. advice on future actions
3. consistency checking

In a system such as ELAS, there are many possible reasoning paths that the user may take, most of which are under user control. Thus depending on the user's prior actions, the model may give radically different advice even for the same initial data. Furthermore, depending on its interpretation of the status of analysis and user actions up to a given stage, the model will make recommendations to try out various preferred methods for subsequent analysis. The user also has the choice to enter certain a priori information about the problem, such as whether one ought to expect gas in the well. In this case, if we were told not to expect gas, but gas is indicated by an analysis of some of the logs, we can proceed to get clues as to whether the method of analysis might be at fault, whether the logs are noisy or otherwise inaccurate, or whether some underlying assumption is unjustified, etc. This is an example of the kind of consistency checking that involves both the log data and the interplay between the various methods of analysis. While actual advice depends on the types of analyses that are specific to a domain, we have found that they are all readily representable in a production rule framework.
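Before turning to communication between the programs, the recomputation cost behind formula (1) is easy to see in miniature. This Python sketch is purely illustrative, with invented values; it applies the formula at every digitized depth point, which is why even a small parameter edit forces the whole derived log, and then the interpretation, to be redone:

    # Illustrative recomputation of the derived Sw log; values are invented.
    def water_saturation_log(F, Rw, Rt_log):
        """Apply Sw = (F * Rw / Rt)^0.5 at every digitized depth point."""
        return [(F * Rw / rt) ** 0.5 for rt in Rt_log]

    rt_log = [12.0, 9.5, 30.0, 4.2]   # resistivity readings, one per foot
    sw = water_saturation_log(F=0.25, Rw=0.05, Rt_log=rt_log)
    # Editing Rw on the master panel reruns the computation for every
    # depth point, after which the advice rules are reapplied.
    sw_revised = water_saturation_log(F=0.25, Rw=0.08, Rt_log=rt_log)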
IV Communication Between Programs

Because we are starting with existing software, we initially see two separate programs, the original software (INLAN) and the interpretation program (EXPERT). To integrate the two, we need a means of communication, which involves automatically filling in the arguments of some of the evidence necessary to correctly invoke the production rules. Figure IV-1 is a simple illustration of such a production rule, where the finding of gas can be communicated once certain tasks have been performed in the well log analysis program. This requires that the original program record such information and pass it on to the interpretation program. Because both programs are written in FORTRAN, communication is relatively straightforward.

    If:   The neutron-density crossplot has been performed,
          and gas is found,
          and the current porosity log has not been gas corrected
    Then: The following advice is given:
          The porosity logs may be gas corrected by Methods a, b.

Figure IV-1: Example of a rule for advising on methods

Secondly, we need to communicate back to the well log program so that the user may now choose whether to accept the advice or not. The selection of methods through the master panel and other screens is menu oriented and therefore we can use a dynamic scheme to place an item on the menu. A production rule may then be invoked to indicate the circumstances under which the method is displayed on the menu and is available for the user to select. Thus, in the example of Figure IV-1, Methods a and b would appear on the menu when the production rule is satisfied. Figure IV-2 gives an overview of communication channels used in ELAS.

Figure IV-2: Overview of ELAS Communication
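The rule-to-menu coupling can be pictured with one more hedged sketch (the real exchange is between two FORTRAN programs; every name below is invented). The production of Figure IV-1 is tested against the evidence, and while it is satisfied the recommended methods appear on the menu from which the user may accept the advice:

    # Illustrative coupling of the Figure IV-1 rule to a dynamic menu.
    def gas_correction_rule(e):
        return (e.get("crossplot-done") and e.get("gas-found")
                and not e.get("porosity-gas-corrected"))

    def build_menu(e, base_items):
        """Add the advised methods to the menu only while the rule holds."""
        items = list(base_items)
        if gas_correction_rule(e):
            items += ["gas-correct porosity: Method a",
                      "gas-correct porosity: Method b"]
        return items

    evidence = {"crossplot-done": True, "gas-found": True}
    print(build_menu(evidence, ["total analysis", "crossplot"]))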
V Conclusions and Future Directions

A major challenge of this project is to develop an expert system that is effective and productive in a realistic, large scale application. In the present project there is a strong incentive to encode expert knowledge and make this knowledge available to others in the field because of the scarcity and cost of expert interpreters. The building of such a system will allow us to gain important insight into the complete spectrum of tasks needed for taking an expert system from conception and design to actual use in a real world setting.

Another long-term goal of the ELAS project is to abstract some of the general principles of knowledge organization and systems design that are applicable to signal interpretation problems of which well log analysis is an example. Of particular interest to us is the development of more general schemes to interface an interpretive program to existing applications software. We have already established some of the mechanisms in this project, but a more structured approach may prove valuable in addition to a production rule scheme.

Formal representation issues regarding multi-level expert systems are presently being studied within the framework of the ELAS system. Specification of inter-level communication, such as task dependencies, task-observation relations, and consistency invocations, should eventually have schemes for easy encoding in the knowledge base. For example, task dependencies, such as if Task A is redone then Task B must also be redone, should be more concisely representable than in a pure production rule knowledge base.

The system is expected to be operational in Amoco's regional offices in the latter part of this year. If well-received, we expect that the system will grow in expertise, benefiting from the contributions of many knowledgeable users.

VI References

[1] Hart, P. "Directions for AI in the Eighties." SIGART Newsletter 79 (1982), 11-16.
[2] Bennett, James S. and Engelmore, Robert S. "SACON: A Knowledge-Based Consultant for Structural Analysis." In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, Tokyo, Japan, 1979, 47-49.
[3] Weiss, S., Kulikowski, C., Galen, R. "Developing Microprocessor Based Expert Models for Instrument Interpretation." In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, Canada, 1981, 853-855.
[4] Fagan, Lawrence M. "Ventilator Manager: A Program to Provide On-Line Consultative Advice in the Intensive Care Unit." Technical Report HPP-78-16, Heuristic Programming Project, Stanford University, September 1978.
[5] Martner, Samuel T. and Brigham, Robert J. "An Interactive Computer System for Well Log Analysis." In SPWLA Sixteenth Annual Logging Symposium, June 4-7, 1975.
[6] Davis, R., Austin, H., Carlbom, I., Frawley, B., Pruchnik, P., Sneiderman, R., Gilreath, J. "The Dipmeter Advisor: Interpretation of Geologic Signals." In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, Canada, 1981, 846-849.
[7] Weiss, Sholom, and Kulikowski, Casimir. "EXPERT: A System for Developing Consultation Models." In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, Tokyo, Japan, 1979, 942-947.
SALIENCE AS A SIMPLIFYING METAPHOR FOR NATURAL LANGUAGE GENERATION

David D. McDonald and E. Jeffery Conklin
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01002 USA*

Abstract

We have developed a simple yet effective technique for planning the generation of natural language texts that describe photographs of natural scenes as processed by the UMass VISIONS system. The texts follow the ordering on the scene's objects that is imposed by their visual salience -- an ordering which we believe is naturally computed as a by-product of visual processing, and thus is available -- for free -- as the basis for generating simple but effective texts without requiring the complex planning machinery often applied in generation. We suggest that it should be possible to find structural analogs to visual salience in other domains and to build comparably simple generation schemes based on them. We look briefly at how one such analogy might be drawn for the task of tutoring novice PASCAL programmers.

I Natural language generation and the superhuman-human fallacy

Taken in its general form, the problem of deciding what to say is a planning problem of great complexity. When speaking carefully and deliberately a person will attempt to simultaneously satisfy many goals from different sources: rhetorical, tutorial, affective, and descriptive, among others. Utterances are intended to obey strict constraints deriving from the limited expressive power of the syntax and vocabulary of natural language and from the requirement to maintain the linguistic coherency of the discourse context established by what has been said up to that point. In addition utterances must do all this while being reasonably short in length and precise in style if the audience is not to become bored or confused. It is no wonder, then, that the ability to speak or write well does not come easily. Even though we all use language constantly, relatively few of us have the skill of a Mark Twain or a Winston Churchill. The requirements of everyday communication do not appear to require optimum linguistic performance.

* This report describes work done in the Department of Computer and Information Science at the University of Massachusetts. It was supported in part by National Science Foundation grant IST 8104984 (Michael Arbib and David McDonald, Co-Principal Investigators).

In this light, we must consider whether we have been making the generation problem for computers more difficult than it actually is for people -- the superhuman-human fallacy. Should we require our computers to speak any more effectively than we do ourselves? Most of us, as we speak, notice when we have left something out or inadvertently given the wrong emphasis, and we correct our mistakes by interrupting or modifying what we were about to say next; in explanations we use feedback from our audience such as questions or puzzled looks to dynamically adjust our vocabulary and level of detail. We should seriously consider designing our natural language generation systems on a similar basis: adopting an expedient and computationally efficient, if "leaky", planning process and compensating for it by monitoring and attending to user questions. At the University of Massachusetts we have developed just such an expedient planning system, which we use in conjunction with a highly efficient (i.e. quasi-realtime) text generator.
Taking as input a simulation of the output of a computer vision system, the planner determines the order in which objects will be mentioned in the text and what will be said about them, feeding this information via a pipeline to the generator, where grammatical constraints determine the exact phrasing and local rules (such as pronominalization and ellipsis) are applied to maintain the coherency of the discourse. The key to the planner's simplicity is its reliance on the notion of "salience" -- objects are introduced into the text according to their relative importance in the conceptual source of the text. The decision as to what objects, properties, and relations to leave out -- a source of considerable labor in some generation systems (e.g. Mann and Moore [6], McKeown [5]) -- is handled trivially here by defining a cut-off salience rating below which objects are ignored.

The task for which we developed this facility, the production of short paragraphs describing photographs of houses, is deliberately one in which the common sense notion of visual salience is vivid and widely shared by members of this culture. People interpret what is important about a picture -- what it is a picture "of" -- according to a shared set of conventions involving the size and centrality of the objects shown, coupled with a sense of what is normal or expected: a large stained-glass window on an otherwise ordinary New England farm house would be highly salient; similarly a normally unimportant part of the scene, such as the mailbox, can be artificially raised in salience if framed prominently in the foreground of the picture.

II Our Generation System

As of this writing, the salience-based planner (the subject of Conklin's PhD. thesis) has been implemented and its pipeline to the text generator (McDonald's system MUMBLE [9]) hand simulated. The house scenes which are the source of the text are very similar to those used in the research of the UMass "VISIONS" system [10] (see Figure 1); their representation is also presently hand-simulated: the planner works from a KL-ONE data base of the objects in the scene and the spatial relations between them, which was designed in close collaboration with members of the VISIONS project, and which reflects the actual kinds of information they expect to extract from a visual scene. The salience ratings with which the objects in the visual representation are annotated were derived empirically through extensive psychological testing of human subjects [3], where the subjects both rated the objects in each of a series of pictures on a zero to seven scale, and wrote short paragraphs describing the scenes. The objects' ratings were quite consistent across subjects and sessions of the experiment. The paragraphs provide an objective base-line for the kind of style and overall organization that should be generated by the system.

Fig. 1. One of the pictures used in the studies and an example of the kind of descriptive paragraph that subjects wrote about it. "This is a picture of a white house with a fence in front of it. The house has a red door and the fence has a red gate. There is a driveway beside the house, and a tree next to the driveway. In the foreground is a mailbox. It is a cloudy day in winter."
Given the salience data, the planning algorithm runs as follows (see also [4]): the objects in the scene are placed in a list -- the "Unused Salient Object List" -- in decreasing order from most to least salient. The properties of the objects (such as color, size, or style) and their relative spatial relations can be accessed from the general scene data base when desired; one can, for example, ask for the most salient relationship in which a particular object is involved (by definition relations acquire their salience from the objects they relate). Objects are taken from the "Unused Salient Object List" (shortening the list in the process), packaged with selected properties and relations, and sent to the generator by the action of a collection of strictly local rhetorical rules. The rules are couched as productions, have relative priorities, and are organized into packets according to when they apply -- essentially the same architecture as Marcus used in his natural language parser [7]. This architecture allows us to incorporate object-specific rules (such as that one always sees houses introduced with one of their properties: "a white house" or "a New England farm house", and never simply as "a house") and also simple stylistic rules, such as maintaining sentences of an appropriate length. The process proceeds by successively taking the first object on the list (i.e. the most salient unmentioned object), making it the local "current item", and proceeding to describe the most salient properties and relations, finally "popping" the list of unmentioned objects and moving on to describe the next most salient object.

The scene descriptions produced by this process will never win a prize for good literature. They are, however, apparently effective as descriptions: as judged by (so far only a few) informal trials, paragraphs generated automatically on the basis of the salience ratings derived from the experiments are effective in picking out which picture they correspond to from others of the same material but taken from a different camera angle. Furthermore they provide a base line for potentially measuring the "value-added" of a full-scale global planning system that would be capable of reasoning about and directing the larger-scale rhetorical organization of the text (say, one on the model of Appelt [1], or McKeown [5]).
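The control loop just described is simple enough to sketch. The following Python fragment is a reconstruction for illustration only, not the actual implementation; all names and numbers in it are invented, and the cut-off rating mentioned earlier appears as a plain numeric comparison:

    # A minimal sketch of the salience-driven planning loop; all names and
    # numbers are invented, and the real system worked over a KL-ONE scene
    # data base rather than Python dictionaries.
    CUTOFF = 3.0   # objects rated below this are simply never mentioned

    def plan(scene_objects, cutoff=CUTOFF):
        """Yield objects for the generator, most salient first."""
        unused = sorted((o for o in scene_objects if o["salience"] >= cutoff),
                        key=lambda o: o["salience"], reverse=True)
        while unused:
            current = unused.pop(0)   # most salient unmentioned object
            # Here the local rhetorical rules would package `current` with
            # its most salient properties and relations.
            yield current["name"]

    scene = [{"name": "house", "salience": 6.8},
             {"name": "mailbox", "salience": 5.9},
             {"name": "bush", "salience": 1.2}]   # below the cut-off
    print(list(plan(scene)))   # ['house', 'mailbox']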
III Where does salience come from?

We claim that the annotation of an object's visual salience can be provided as a natural part of the perception process. For example, one aspect of salience stems from unexpectedness: items which are not predicted by, or are inconsistent with, high-level world knowledge are unusual and therefore salient. Also, an item's size and centrality in the picture are clearly factors in that item's salience. Specifically, the record of an object's relative salience would arise from the perceptual process's explicitly combining: 1) the weighting the object contributed to the judgement that the scene was what it was, 2) the object's intrinsic salience (from general world knowledge, e.g. people are intrinsically more salient than bushes in conventional pictures), and 3) the amount of "extra effort" that was required to resolve the assignment of the object to a slot of the frame when default assumptions were violated. The salience annotation of the visual representation is consequently provided as a direct part of the perceptual analysis of the picture, or is no more than minimal additional on-line computation. (Moreover, the perceptual analysis is the only stage at which salience values can be cheaply computed.) As a result, a salience-based planner consumes less overall computational effort than a more conventional language planner -- the salience information is provided at no additional computational cost by an already present, non-linguistic process, and it acts as a powerful heuristic in guiding the process of deciding what to say and what to leave out.

While the common sense concept of "salience" applies most naturally to perceptual domains, if we are to draw on salience as an organizing metaphor for language generation in non-perceptual domains, such as tutoring or explanations of physical processes, then we must step back from the specific measures listed above. What role does salience play in the coordination of our thinking about a picture? What kinds of computational processes does it reflect?

To answer these questions we must cast the "goals" of the visual process in terms that carry over into non-perceptual domains. The approach of the VISIONS system is to combine bottom-up analysis of regions and edges in the raw visual image with top-down testing of frame-based hypotheses about the subject matter of the scene. The VISIONS system is thus model-driven once it moves beyond the low-level vision problem of identifying regions. For example, once the system has enough edge and region data to suggest that there is a house in the image it will apply its generic knowledge about what parts houses typically have and how they are typically spatially arrayed to attempt to impose identifications on regions which would otherwise be ambiguous. Note that even if the image is actually a picture of, say, a boat in the water, it is still possible that elements of the boat's image might initially trigger the house hypothesis; in this case elements of the picture which were inconsistent with the house scene frame, such as the blue color of the ground plane, would be vital in cutting off expensive attempts to further instantiate that frame.

Broadly speaking, the process of perception can be viewed as a process of building an internal model of some external "world" based on "sensory" data from that world and generic knowledge about it. In this light the components of salience can be described more abstractly. First, the system relies on the conventions of centrality and size of a region to direct its attention so that its first analyses are of those parts of the photograph which are most likely to yield a potent model for identifying the rest of the scene. Second, elements of the image which are unexpected (i.e. which do not have a good "fit" with their slot in the hypothesized frame) are important to the efficient allocation of resources, and would likely be annotated with some measure of their "goodness of fit". Finally, information about the intrinsic importance of various items in the scene might be useful in the allocation of additional resources to the confirmation of their identification (e.g., if the system is told, as part of its world knowledge, that people are intrinsically important, it would reasonably want to be especially sure when it identified image regions as people).

To summarize, these are the elements of model building for which the notion of salience is especially important: 1) structural knowledge about where in the external field of data to focus resources initially (e.g. size and centrality); 2) use of a measure of "goodness of fit" to direct the competition and instantiation of generic frames; and 3) a priori knowledge about what "objects", if found in the "world", are particularly important to the system (i.e. intrinsic salience).

IV Salience in a tutoring task

We are beginning to see this confluence of knowledge sources plus deviation from defaults as perhaps the source of saliency in another domain where we are working on natural language generation: the planning of tutorial dialogues.

Beverly Woolf, an advanced graduate student working with McDonald, is extending the work of the MENO-II project [11] on identifying conceptual misconceptions in simple loop programs, so as to develop a program capable of tutoring the student on a selected subclass of these misconceptions. Analogous to the parameters of size and centrality, the MENO-II project has knowledge about PASCAL errors and their relationships to various misconceptions: this is the starting point for the tutor's analysis. Analogous to the VISIONS system's generic knowledge about possible objects and scenes, the MENO-II project has a very rich knowledge base of the plans and schemas inherent in correct loop programs and their relationship to the student's coherent (but inappropriate to PASCAL) model of the algorithms: a rich, hierarchically organized KL-ONE representation is used for this purpose, including a taxonomy of common misconceptions. Finally, analogous to VISIONS' a priori knowledge about intrinsic importance, the tutoring system has certain "bugs" and misconceptions flagged as especially important and revealing if found in the model of the student.

The tutoring program is still in the early design stages, consequently we cannot yet be certain that our strategy of applying the salience metaphor to this problem will succeed. However, our intention is as follows: the model of the history of the student will be projected onto the general knowledge base of program techniques and typical misconceptions, where it will be used to identify those parts of the knowledge base which are most relevant to the student's problem, and to suggest a tutoring strategy which will build most effectively on what the student already knows and will carry them forward to the most appropriate programming concepts for them to learn next. Said another way, the student's buggy program will typically contain several misconceptions on which the student could be tutored. The general knowledge base (interpreted now in terms of plausible tutoring strategies, i.e. alternative sequences of examples and probing questions) will provide a backdrop on which to project the student's specific history so as to pick out the best strategy for that case. By monitoring this analysis process we should be able to annotate specific concepts in the knowledge base and points in the buggy program with a rating analogous to that of visual salience: this annotation will yield the ordering in which those points and concepts are to be taken up during the tutoring session.

References

[1] Appelt, D. Planning Natural Language Utterances to Satisfy Multiple Goals, Ph.D. Dissertation, Stanford University (to appear as a technical report from SRI International), 1982.
[2] Conklin, E. J. (in preparation) Ph.D. Dissertation, COINS, University of Massachusetts, Amherst, 01003.
[3] ----- and Ehrlich, K.
(in preparation) "An Empirical Investigation of Visual Salience and its Use in Natural Language Processing", Technical Report, COINS, U. Mass., Amherst, Ma. 01003.
[4] ----- and McDonald, "Salience: The Key to the Selection Problem in Natural Language Generation", in the Proceedings of the Association for Computational Linguistics, Toronto, Canada, 1982.
[5] McKeown, K. R. Generating Natural Language Text in Response to Questions about the Data Base Structure, Ph.D. Dissertation, Moore School of Electrical Engineering, University of Pennsylvania, 1982.
[6] Mann, W. and Moore, J. "Computer Generation of Multiparagraph Text", American Journal of Computational Linguistics, 7:1, Jan-Mar 1981, pp 17-29.
[7] Marcus, M. A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, Massachusetts, 1980.
[8] McDonald, David D. "Language Generation: the source of the dictionary", in the Proceedings of the Annual Conference of the Association for Computational Linguistics, Stanford University, June, 1981.
[9] -----, Natural Language Generation as a Process of Decision Making under Constraint, Ph.D. dissertation, MIT, 1980.
[10] Parma, Cesare C., Hanson, A. R., and Riseman, E. M. "Experiments in Schema-Driven Interpretation of a Natural Scene", in Digital Image Processing, Simon, J. C. and Haralick, R. M. (Eds), D. Reidel Publishing Co., Dordrecht, Holland, 1980, pp 303-334.
[11] Soloway, Elliot, Beverly Woolf, Eric Rubin, and Paul Barth. "Meno-II: An Intelligent Tutoring System for Novice Programmers", Proceedings of the International Joint Conference on Artificial Intelligence, Vancouver, British Columbia, 1981.
CONCEPTUAL DEPENDENCY AND MONTAGUE GRAMMAR: A STEP TOWARD CONCILIATION

Mark A. Jones
David S. Warren*
State University of New York at Stony Brook
Long Island, New York 11794

ABSTRACT

In attempting to establish a common basis from which the approaches and results can be compared, we have taken a conciliatory attitude toward natural language research in the conceptual dependency (CD) paradigm and Montague Grammar (MG) formalism. Although these two approaches may seem to be strange bedfellows indeed, with often noticeably different perspectives, we have observed many commonalities. We begin with a brief description of the problem view and ontology of each and then create a formulation of CD as logic. We then give "conceptual" MG translations for the words in an example sentence which we use in approximating a word-based parsing style. Finally, we make some suggestions regarding further extensions of logic to introduce higher level representations.

I INTRODUCTION

In the past decade a series of "process models" have been developed that attempt to capture various aspects of natural language understanding. We refer primarily to models having some form of underlying conceptual representation such as Schank's conceptual dependency (CD) notation [1] and, possibly, higher level knowledge structures such as scripts [2] (or frames [3]), plans [4], MOPs [5], TAUs [6], etc. that allow inferences to be made. Since we shall focus primarily on the systems of Schank's group we will refer to these models as CD models. The language understanding mechanisms in CD models have been explained by example, by English prose, and by the publication of micro versions (programs) [7], but not yet in any truly formal way that would facilitate comparison to other approaches and evaluation of alternative representational choices.

On the other hand, one of the most formally elaborated systems for natural language description is Montague Grammar (MG) [8,9]. MG is a logic system based on the typed lambda calculus that is capable of expressing modality, tense, intension, extension, etc. It provides for a particularly extensive treatment of reference (quantification, possible worlds, etc.).

The emphases and goals of the CD and MG research have not generally coincided and a direct comparison is difficult. The CD systems have attempted to model cognitive processes and have focused on contextual (story) understanding, summarization and question answering tasks. MG has been a research tool for language philosophers and linguists, has not been computationally applied ([10] is an exception), and has been directed primarily at declarative sentences in isolation. The remainder of this paper will attempt to formally characterize aspects of the CD systems by extending the MG framework to accommodate the objects and processes used in CD systems for contextual understanding. It is our hope that we can make the reference strategies and representational choices in CD systems perspicuous, and also extend the MG formalism to include a wider range of phenomena.

* This material is based upon work supported by the National Science Foundation under grant #IST-8010834.

II ONTOLOGY OF CD AND MG

The basic CD ontology views the world in terms of (a) picture producers (PPs), which correspond to real world entities, (b) real world events (occurrences of acts), (c) states, and (d) temporal and causal relations that may exist between pairs of events or states.
Schank maintains that a relatively small number of distinct types of primitive acts, states and relations combine in a variety of ways to represent simple physical events and their interactions. Scripts, plans, goals, themes, MOPs, TAUs, etc. have been proposed as useful ways to represent general knowledge about particular configurations of CD objects for the purpose of inference and disambiguation.

A PP is either explicitly introduced in a noun phrase (e.g., John, a man, the boy in the blue coat) or is implicitly introduced via reference to a higher level representation such as a script in which it participates (or can be inferred to participate). In sentence (1), the policeman is implicitly introduced in the first clause, and only by that introduction can the pronominal reference "he" in the second clause be understood. Similarly, in sentence (2), the use of the definite referent "the" is not odd despite the fact that a policeman has not yet been explicitly introduced.

(1) I was stopped yesterday for speeding, but he didn't give me a ticket.
(2) I was stopped yesterday for speeding, but the policeman didn't give me a ticket.

A conceptual analysis of a natural language expression requires that the underlying acts, states and relations be identified. "John killed Mary" is analyzed as "John did some (unspecified) action which resulted (by result causation) in Mary undergoing a state change from alive to dead" [1, p.50]. The action may be specified or inferred from other phrases added to the sentence or the context in which the sentence appears.

The ontology of MG is based on the set of truth values (true, false), the set of entities (PPs of CD), and the set of indices consisting of possible worlds and points of time. A function space is constructed inductively from these basic sets. These include sets of entities, sets of sets of entities, etc. For example, common nouns are treated as denoting sets of entities. For most of our discussion here we will consider only extensional aspects of MG and so the set of possible worlds does not enter.

An important aspect of MG is the close coupling of syntax and semantics. A Montague Grammar consists of an inductive definition of the set of meaningful phrases of English. The model-theoretic interpretation of each phrase is defined recursively over its syntactic derivation. For perspicuity, this interpretation is defined by means of a translation into a typed lambda calculus for which a model-theoretic semantics has previously been provided. So even though most of the mechanics of manipulating phrases and meanings in MG look like syntactic operations on lambda formulas (or LISP-like code), we should bear in mind that the real semantic objects are such things as sets of entities and their properties and not the lambda formulas themselves.

One contribution of MG is the unified treatment of quantified NPs (e.g., every man, a woman) and proper nouns (e.g., John). This is achieved by considering all noun phrases as referring to sets of properties of entities. So the meaning of the proper name "John" is taken to be the set of properties of the entity john; in symbols λP (P(john)). (Some intuition can be gained by considering this as a LISP function that takes a predicate as argument and returns the result of applying that predicate to the atom JOHN: (LAMBDA (P) (P JOHN)).)
Intransitive verbs are translated to sets of entities, i.e., simple predicates. For example, the intransitive verb "walk" has as its meaning a set of entities: those that walk; in the lambda calculus this is referred to by a predicate walk'. (In LISP, a predicate (LAMBDA (X) (WALK X)), or just WALK.) The meaning of a sentence is obtained by applying the function that is the meaning of the subject to the predicate that is the meaning of the verb phrase. Thus "John walks" has the meaning

[λP(P(john))](walk')

which λ-reduces to

walk'(john)

The advantage of this added complexity is that it also handles quantified noun phrases. For example, "a woman" translates to the set of properties that are true of some woman: λP(∃x(woman'(x) ∧ P(x))). The sentence rule given above generates the meaning of "A woman walks" as

[λP(∃x(woman'(x) ∧ P(x)))](walk')  ⟹  ∃x(woman'(x) ∧ walk'(x))

Similarly, "every woman walks" is

[λP(∀x(woman'(x) → P(x)))](walk')  ⟹  ∀x(woman'(x) → walk'(x))

Another contribution of MG is its handling of pronominalization and coreference. This is done by introducing syntactic variables as NPs and then substituting a normal NP for the first occurrence of a particular variable in a phrase and appropriate pronouns for subsequent occurrences of the same variable. For example, we can first generate the sentence "y walks and y talks", where y is a syntactic NP variable. Then we substitute the NP "a man" and obtain "a man walks and he talks". The corresponding semantic rule is to λ-abstract the variable substituted for over the sentence translation and then to apply to that result the translation of the substituted NP. (This asserts that the NP substituted in has the property defined by the sentence with respect to the variable substituted for.) For example, for the sentence above, we get

[λP(∃x(man'(x) ∧ P(x)))](λy(walk'(y) ∧ talk'(y)))
  ⟹ ∃x(man'(x) ∧ [λy(walk'(y) ∧ talk'(y))](x))
  ⟹ ∃x(man'(x) ∧ walk'(x) ∧ talk'(x))

This substitution mechanism provides a very powerful and flexible way to bind various occurrences of a variable to an entity or, in CD terms, to fill many slots with the same referent.

III A CONCILIATION

Now that we have introduced the two approaches, we move to an example that shows how they might overlap. We formulate CD as logic in order to facilitate the comparison (cf. [11]). We will assume a sorted and typed lambda calculus in which there is an IS-A hierarchy of entities which distinguishes among and within PPs, acts, and states. The notation xi:#<class> is used to indicate that the variable xi takes values from the sort #<class>. Consider the conceptual case frame for "John walked to a store":

(PTRANS (ACTOR HUM0)
        (OBJECT HUM0)
        (FROM NIL)
        (TO PHYSOBJ0)
        (INST (MOVE (ACTOR HUM0)
                    (OBJECT (BODYPART (TYPE (FEET))))
                    (TIME TIME0)))
        (TIME TIME0))

where HUM0 is (CLASS (#PERSON)) (FIRSTNAME (JOHN)) (GENDER (MASC)), TIME0 is (CLASS (#TIME)) (BEFORE (*NOW*)), and PHYSOBJ0 is (CLASS (#STORE)).

We propose the following logical representation of this case frame:
∃x1:#EVENT ∃x2:#PERSON ∃x3:#PHYSOBJ ∃x4:#STORE ∃x5:#EVENT ∃x6:#TIME
  [PTRANS(x1) ∧ ACTOR(x1,x2) ∧ OBJECT(x1,x2) ∧ FROM(x1,x3) ∧ TO(x1,x4)
   ∧ INST(x1,x5) ∧ MOVE(x5) ∧ ACTOR(x5,x2) ∧ OBJECT(x5,FEET(x2))
   ∧ TIME(x5,x6) ∧ TIME(x1,x6) ∧ FIRSTNAME(x2,JOHN) ∧ GENDER(x2,MASC)
   ∧ BEFORE(x6,*NOW*)]

We can abbreviate this formula, to appear more like the CD case frames, as:

∃x1:#EVENT ∃x2:#PERSON ∃x3:#PHYSOBJ ∃x4:#STORE ∃x5:#EVENT ∃x6:#TIME
  [(PTRANS x1 (ACTOR x2) (OBJECT x2) (FROM x3) (TO x4)
     (INST (MOVE x5 (ACTOR x2) (OBJECT (FEET x2)) (TIME x6)))
     (TIME x6))
   ∧ (#PERSON x2 (FIRSTNAME (JOHN)) (GENDER (MASC)))
   ∧ (#TIME x6 (BEFORE (*NOW*)))
   ∧ (#STORE x4)]

The table below gives translations for the words that occur in our example sentence, "John walked to a store". For each word, it gives a syntactic rule that describes how the word combines with related expressions, and a corresponding semantic rule that shows how their meanings combine.

'John' translates to λP (∃y2:#PERSON (P(y2) ∧ (#PERSON y2 (FIRSTNAME (JOHN)) (GENDER (MASC))))).

'store' translates to #STORE.

'a' translates to λQ λP (∃y:Q (P(y) ∧ Q(y))); it combines syntactically with a common noun β, giving [['a']]([[β]]).

'-ed' combines with a verb α; its semantic effect, [['-ed']](λy6 [[α]]), existentially quantifies the verb's time variable: ∃x6:#TIME ({λy6 [[α]]}(x6) ∧ (#TIME x6 (BEFORE (*NOW*)))).

'walk'* translates to

(PTRANS x1 (ACTOR y2) (OBJECT y2) (FROM y3) (TO y4)
  (INST (MOVE x5 (ACTOR y2) (OBJECT (FEET y2)) (TIME y6)))
  (TIME y6))

with x1:#EVENT and x5:#EVENT existentially quantified and with free PP-variables y2:#ANIM, y3:#PHYSOBJ, y4:#PHYSOBJ and y6:#TIME. Syntactically, the PP-expressions appear in appropriate sequences with 'walk': e.g., y2 appears before walk, y4 may be marked with 'to', etc. Semantically, for the first PP-expression yi, w1 = [[yi]](λyi [['walk']]); wj+1 is formed by taking another PP-expression yk appearing in the phrase and forming [[yk]](λyk (wj)), for j = 1, 2, ....

* This is a slight simplification -- to be precise we would add syntactic variables in a way analogous to Montague's use of he-n.

The tree in Figure 1 shows how the word translations are combined, using the semantic rules given in the table, to form the meaning of the sentence. For perspicuity, we have chosen bound variable names (e.g., x6) to correspond to the names in the CD case frame of the example.

[['-ed']](λy6 [['walk']]) ⟹ w1 =
  ∃x6:#TIME ∃x1:#EVENT ∃x5:#EVENT
    ((PTRANS x1 (ACTOR y2) (OBJECT y2) (FROM y3) (TO y4)
       (INST (MOVE x5 (ACTOR y2) (OBJECT (FEET y2)) (TIME x6)))
       (TIME x6))
     ∧ (#TIME x6 (BEFORE (*NOW*))))

[['John']](λy2 (w1)) ⟹ w2 =
  ∃x2:#PERSON ∃x6:#TIME ∃x1:#EVENT ∃x5:#EVENT
    ((PTRANS x1 (ACTOR x2) (OBJECT x2) (FROM y3) (TO y4)
       (INST (MOVE x5 (ACTOR x2) (OBJECT (FEET x2)) (TIME x6)))
       (TIME x6))
     ∧ (#TIME x6 (BEFORE (*NOW*)))
     ∧ (#PERSON x2 (FIRSTNAME (JOHN)) (GENDER (MASC))))

{[['a']]([['store']])}(λy4 (w2)) = {λP (∃x4:#STORE (P(x4) ∧ (#STORE x4)))}(λy4 (w2)) ⟹
  ∃x4:#STORE ∃x2:#PERSON ∃x6:#TIME ∃x1:#EVENT ∃x5:#EVENT
    ((PTRANS x1 (ACTOR x2) (OBJECT x2) (FROM y3) (TO x4)
       (INST (MOVE x5 (ACTOR x2) (OBJECT (FEET x2)) (TIME x6)))
       (TIME x6))
     ∧ (#TIME x6 (BEFORE (*NOW*)))
     ∧ (#PERSON x2 (FIRSTNAME (JOHN)) (GENDER (MASC)))
     ∧ (#STORE x4))

Figure 1: Parse Tree Example

The tree in Figure 1 reproduces a particular order of applying the rules, which corresponds to a particular demon firing sequence in a CD parser. Variables which remain free in the final semantic representation, e.g. y3, are assumed to be existentially quantified.
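One way to see what such a formula commits us to is to replace each existentially quantified, sorted variable by a fresh constant and store the resulting conjunction as a set of ground atoms. The following sketch is ours, not the authors'; the query at the end is an invented illustration (Python).

import itertools

counter = itertools.count(1)
def skolem(sort):
    # replace an existentially quantified, sorted variable by a fresh constant
    return (f"e{next(counter)}", sort)

x1, x2 = skolem("#EVENT"), skolem("#PERSON")
x3, x4 = skolem("#PHYSOBJ"), skolem("#STORE")
x5, x6 = skolem("#EVENT"), skolem("#TIME")

# the conjunction of atoms for "John walked to a store"
facts = {
    ("PTRANS", x1), ("ACTOR", x1, x2), ("OBJECT", x1, x2),
    ("FROM", x1, x3), ("TO", x1, x4), ("INST", x1, x5),
    ("MOVE", x5), ("ACTOR", x5, x2), ("OBJECT", x5, ("FEET", x2)),
    ("TIME", x5, x6), ("TIME", x1, x6),
    ("FIRSTNAME", x2, "JOHN"), ("GENDER", x2, "MASC"),
    ("BEFORE", x6, "*NOW*"),
}

def holds(*atom):
    return atom in facts

# a query: did some #PERSON named JOHN PTRANS to some #STORE?
went = any(holds("PTRANS", e) and holds("TO", e, s) and holds("ACTOR", e, p)
           and holds("FIRSTNAME", p, "JOHN") and s[1] == "#STORE"
           for e in [x1, x5] for s in [x3, x4] for p in [x2])
print(went)  # True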
An important area for future investigation is the logical analysis of inference processes. Low-level inference processes are quite naturally handled in logic. For example, the inference rule below states: "if x2 EXPELs x3 from x4 to x5, then x3 was previously INGESTed".

∀x1 x2 x3 x4 x5 x6
  (EXPEL x1 (ACTOR x2) (OBJECT x3) (FROM x4) (TO x5) (TIME x6))
  ⟹ ∃x7 x8 x9 x10
      ((INGEST x7 (ACTOR x2) (OBJECT x3) (FROM x8) (TO x9) (TIME x10))
       ∧ (#TIME x10 (BEFORE x6)))

Higher-level inference processes such as script application may be viewed as generalized configurations of acts and states with quantified script variables (associated PPs that participate in the script). By matching the patterns that arise in processing text with a generalized script configuration, references such as the policeman in sentences (1) and (2) can be computed. The instantiation of the "speeding" script in the first clause introduces the existentially quantified policeman that the referent in the second clause requires.

IV CONCLUSION

We have presented here a first step towards a conciliation of CD and MG. This hesitant step was taken at the expense of some simplifications in both approaches; we admittedly have not included important aspects of each. We believe, however, that we have found a common base that will allow the further interaction and development of each theory. The stage is set for more extensive communication in which the ideas important to each approach can be evaluated in terms of the other, and in which each can incorporate the other's successes.

REFERENCES

[1] Schank, R.C. Conceptual Information Processing. North-Holland, Amsterdam, 1975.
[2] Cullingford, R.E. Script Application: Computer Understanding of Newspaper Stories. Ph.D. Dissertation, Research Report #116, Computer Science Department, Yale University, 1978.
[3] Charniak, E. Ms. Malaprop, A Language Comprehension Program. IJCAI-5, Cambridge, Mass., 1977.
[4] Wilensky, R. Understanding Goal-Based Stories. Ph.D. Dissertation, Research Report #140, Computer Science Department, Yale University, 1978.
[5] Schank, R.C. Reminding and Memory Organization: An Introduction to MOPs. Research Report #170, Computer Science Department, Yale University, 1979.
[6] Lehnert, W. et al. BORIS -- An Experiment in In-Depth Understanding of Narratives. Research Report #188, Computer Science Department, Yale University, 1981.
[7] Schank, R.C. and Riesbeck, C.K. (eds.) Inside Computer Understanding. Lawrence Erlbaum Associates, New Jersey, 1981.
[8] Montague, R. Formal Philosophy: Selected Papers of Richard Montague. Edited and with an introduction by Richard Thomason. Yale University Press, New Haven, 1974.
[9] Dowty, D.R., Wall, R.E., and Peters, S. Introduction to Montague Semantics. D. Reidel Publishing Co., Dordrecht, Holland, 1981.
[10] Friedman, J. and Warren, D.S. A Parsing Method for Montague Grammars. Linguistics and Philosophy 2 (1978), 347-372.
[11] Davidson, D. Truth and Meaning. Synthese 17 (1967), 304-323.
EVENT SHAPE DIAGRAMS

D. L. Waltz
Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 U.S.A.

ABSTRACT

"Event shape diagrams" are proposed as a representation for capturing the nuances of meaning of verbs that describe similar events. These diagrams represent timing, causal relationships between case roles, and typical value ranges for role fillers. Event shape diagrams are expressed in terms of primitive predicates and timing information that we believe could be computed by perceptual systems, and are intended to be a step toward the eventual connection of language systems to perceptual (vision, hearing, and touch) sensing systems. The diagrams are capable of representing modification of verbs by adverbs, can support judgements of the degree of plausibility of various interpretations of a sentence's meaning, and may be useful in figuring out the meaning of certain kinds of metaphors.

A. Introduction

A satisfactory representation scheme for natural language sentence meaning must offer distinguishably different meaning structures for sentences that people judge to differ in meaning, and at the same time it must be able to represent similar sentences similarly. For the most part, natural language representation systems have been much more concerned with capturing the similarity of meaning of various verbs than with representing the nuances in meaning. For example, in Schank's Conceptual Dependency (CD) representation system [1], eat, ..., overeat, nibble, gulp, wolf, drink, swig, swallow, inhale and smoke (e.g., a cigarette) are all represented by structures centered around the "primitive" INGEST. After processing sentences involving these different verbs, the only trace of difference in the meaning representation would be that the objects of the verb would differ; thus, certain objects are liquids (e.g., milk, beer, coffee) while others are non-liquid (e.g., hot dogs, jello, broccoli), drugs (e.g., insulin, heroin) or smokable substances (e.g., tobacco, pot), and a system that knew the nature of various objects might, at least in some cases, be able to differentiate between drinking, eating, injecting, or smoking types of INGESTs.

This work was supported in part by the Office of Naval Research under contract N00014-75-C-0612, and in part by the National Science Foundation under grant NSF IST 81-17238.

There is much to be said for lumping these meanings together. Most importantly, similar inferences can be made about such events, for example that the agent's desire for the object may be lessened after INGESTing the object, that the object of INGEST is inside the agent after INGESTing, and so on. However, certain important distinctions are necessarily lost: a system that simply substitutes INGEST for all the verbs above will be incapable of making certain kinds of predictions or inferences that a speaker would expect a person to make. For example, compare (1) "John nibbled at some food" with (2) "Mary wolfed some food". As human listeners or readers, we could be expected to infer from (1) that John either did not feel very hungry or did not like the food very much, and from (2) that Mary was probably very hungry or in a hurry. In the proper context, we could even be expected to understand from (1) that John was upset or depressed, and from (2) that Mary had an urgent task to carry out. While I have only used examples that involve INGEST, similar arguments can be made concerning all the CD primitives. There are other problems with CD as well.
As pointed out by Wilks [2], the set of CD primitives is incomplete -- there are no CD primitives for many actions (e.g., divide, construct, bend, join, fasten); and a very wide class of verbs is lumped under the STATE CHANGE primitive. For example, break, chip, crack, destroy, damage, and scratch would all be represented very similarly, as STATE CHANGEs to a state which is negative with respect to the original state. Moreover, unless I have missed something important, the CD representations for get sick, be injured, go crazy, fall down, and be indicted could differ from each other and from the previous examples only in the degree of negativity of the STATE CHANGE.

In recent years, Schank et al. have shown little interest in repairing the shortcomings of CD, but have concentrated instead on developing larger memory structures, in particular scripts [3] and MOPs [4], each of which may contain a large number of CD structures. These bear on the problems above to some degree. For instance, if eat were potentially attached to a script, then from the representation for "John ate a meal," a CD-based system could infer that both eating and drinking probably occurred. However, the representation of nuances in general requires a finer-grained representation, not scripts or other larger structures.

I do not want the preceding remarks to be taken as an attack on Schank et al. In fact, I have used these examples because CD is probably the most advanced system for representation of actions that has been developed in AI! (The only other candidates that spring rapidly to mind are the LNR representation schemes [5], Wilks' preference semantics [6], and, in a sense, Lehnert's emotion modeling constructs [7].)

B. Other Related Background

Rieger's "common sense algorithm" (CSA) work [8] attempted to enumerate causal relationships between states, actions, and tendencies (such as gravity) for use in representing the operation of physical mechanisms. While very promising in certain ways, CSA diagrams have never been well integrated with CD, and have not been able to represent timing, quantitative state variable values, concurrency or hierarchical relationships in a satisfactory way. Furthermore, the construction of CSA diagrams is still an art. More recently, Forbus [9] has developed QPT ("qualitative process theory"), a very promising, interesting, and natural body of programmable methods for reasoning about the behavior of physical objects and mechanisms. While Forbus has not to my knowledge attempted to use his work for representing the meaning of language, the possibilities for doing so are very promising, and I intend to explore them. The spirit of QPT is, I believe, similar to, and compatible with, event shape diagrams.

Adverbs (e.g., quickly, softly, hard, suddenly) have seldom been mentioned in AI papers on natural language understanding. When they have been mentioned [10], they have been viewed as difficult or impossible to deal with. We have developed mechanisms for dealing with adverbs [11] and have recently improved upon these mechanisms. Many adverbs (including the ones above) can be represented very naturally in event shape diagrams, though certain manner adverbs (e.g., viciously, kindly, bluntly) still seem difficult.
Recent work by Lehnert on summarizing narratives [12], by Allen on a temporal logic [13], by Abelson on the relationships between events, actions, plans, and emotions [14], and by Talmy on the relationships between grammar and perception [15] have all played a part in the development of our "event shape diagrams," summarized below.

C. Event Shape Diagrams

In their simplest forms, event shape diagrams have a time line, a scale, and values on the scale at one or more points.* Diagrams can be used to represent concurrent processes, causation, and other temporal relations by aligning two or more diagrams, as illustrated in Fig. 1.

[Fig. 1. Event Shape Diagrams for "Eat".]

* While diagrams are shown here for ease of understanding, data structures must of course be constructed to represent these diagrams. The data structures borrow heavily from Allen [13] for representing before, after, while, etc.

Fig. 1 shows the representation for "eat." Note that four simple diagrams are aligned, and that each has different kinds of scales and different event shapes. The top scale corresponds to the CD primitive INGEST. Causal relations hold between the events described in each simple diagram. Some of the names for the causal relations are adopted from Rieger's CSA work. The action INGEST stops, in this default case, where "desire to eat" goes to zero. "Desire to eat" sums up in one measure coercion, habit, and other factors as well as hunger. Typical values for amounts of food, time required to eat, and so on are also associated with the diagram, to be used as default values.

More levels of detail can be added if needed. For instance, the action diagram can be expanded so that eating involves many recurrences of putting food in one's mouth, biting, chewing, and swallowing, and the diagram for the amount of food inside the agent can reflect a series of stepwise changes as each mouthful is ingested. In the other direction, eat should point to diagrams representing the normal cycle of eating. In the direction of greater detail, I believe that diagrams should eventually "bottom out" in primitive predicates such as contact, surround, be-near, and support, which could be computed for scenes by a vision system; this goal is similar to that of Miller and Johnson-Laird [16].

Many adverbial modifiers can be represented neatly: "eat quickly" shrinks the value of tf - t0 with respect to typical values; "eat a lot" increases the value of q0 - qf above typical values. Similarly, "eat only half of one's meal," "eat very slowly," "eat one bite," etc. can be neatly represented, as the sketch below illustrates.
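A minimal sketch of that scaling idea, assuming a flat record of typical values; the particular numbers and adverb functions are our illustrative inventions, not Waltz's data structures (Python).

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EventShape:
    act: str
    t0: float = 0.0   # typical start time (minutes)
    tf: float = 20.0  # typical finish time
    q0: float = 1.0   # typical initial amount (meals)
    qf: float = 0.0   # typical final amount

EAT = EventShape("INGEST")

# adverbials as rescalings of the typical values
def quickly(ev):         # shrink tf - t0
    return replace(ev, tf=ev.t0 + 0.5 * (ev.tf - ev.t0))

def a_lot(ev):           # increase q0 - qf
    return replace(ev, q0=2.0 * ev.q0)

def one_bite(ev):        # tiny duration and amount
    return replace(ev, tf=ev.t0 + 0.2, q0=0.05)

print(quickly(EAT))      # "eat quickly":  tf - t0 halved
print(a_lot(EAT))        # "eat a lot":    q0 - qf doubled
print(one_bite(EAT))     # "eat one bite"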
The "interest arousal" scale shows up as part of the meaning of many verbs, such as like, en.iov, ha&, Ra!L attention tit desire, fearI, and so on. D. Metaphqll&&EventShaDeN Metaphors can be used to transfer complex combinations of information from one well-known domain to another less well known or completely unfamiliar one. Understanding metaphorical language first requires noting that the language & metaphorical, that is that it couldn't be literal descriptive text. This in turn requires an internal model of what is ordinary, expected, or possible that a system can use to judge the plau- sibility of novel language -- this sort of infor- mation is represented in event shape diagrams by the attached typical values for various scales (see Fig. 1). Next, links must be established between the base domain of the metaphor and the target (novel) domain that the metaphor is being used to describe. The result can become the basis for learning about a new domain (by transferring knowledge from the base domain selectively) or it may simply be that a metaphor allows one to express in a few words many notions about a target domain that would otherwise require a much lengthier exposition (see Ortony [183). Consider for example: (Sl) John ate up the compliments. (S2) Robbie's legs ate up the space between might want the system to infer that in the first sentence John desired the compliments, eagerly "ingested" them with his mind, thereby making them internal and being given by them, and that in the second sentence, the distance between Robbie and Susie was being reduced to zero, just as an amount of food is reduced to zero when it is "eaten up". Fig. 2. Representation of "I was surprised that John ate so much." The structure in Fig. 2 uses the portion of the preceding meaning for eat that is selected by the pattern eat + <quantity>. "Be surprised by" has slots for an agent, an expected event or action, and an actual event or action, which must differ. In this case, we know that John actually ate more than he was expected to eat, so we can fill in his actual behavior in some detail. If the sentence were instead "1 was surprised at how much John ate," it is also possible that John ate less than he was expected to, and the representation could We have been developing methods for making the correct inferences in examples like these (see DeJong and Waltz cl91 for details). The methods depend on matching of the meanings structures, similar to the schema matching of Winston [201. Object meanings are taken to be literal, and are used to suggest candidate verbs for matching. In (Sl) wcomplimentsw suggests "tell" or "hear" as the "true verbs," whereas in (S2) "space,' suggests verbs with underlying PTRANS meaning as the "true verb." For (Sl), then, we would match "eat up,, with "tell" and "hear," select "hear" as a better match, and suggest that the "residue" of eat up (e.g. the portion of its meaning not matched) is intended to be transferred to "hear." Thus, we could infer that John desired to hear the compli- ments, and felt pressure as a result. 86 E. w This work is just beginning. It seems promis- ing, but already it should be clear that event shape diagrams are a a completely general representation, but that they are especially use- ful for representing multiple concurrent processes and functional/causal relationships. Research will continue on the use of event shape diagrams for representation as well as on their range of cover- age. REFERENCES Cl] Schank, R.C. "The Primitive ACTS of Conceptual . 
Dependency." In Theore iLi3iuainNatural m Procu. R. Schank, B. Nash-Webber e . . Arlington, VA: ACL, 1975. [2] Wilks, Y.A. "Primitives and Words." In . TheoretlcalIssuesinNatural -Processinn. R. Schank, B. Nash-Webber (eds.). Arlington, VA: ACL, 1975, 38-41. [31 Schank, R.C. and R. Abelson. a, m, . Goals, an&Understandlna . Hillsdale, NJ: Lawrence Erlbaum Assoc., 1977. 143 Schank, R.C. "Reminding and Memory Organiza- tion: An Introduction to MOPS." Research Report 170, Computer Science Dept., Yale University, December 1979. C51 Rumelhart, D.E. and D.A. Norman. "Active Semantic Networks as a Model of Human Memory." In Em!20 IJCAI-73. Stanford University, Stanford, CA, August 1973, pp. 450-547. [61 Wilks, Y.A. "A Preferential, Pattern-Seeking, Semantics for Natural Language Inference." Artifi- j&l- 6:l (1975) 53-74. [73 Lehnert, W.G., M.G. Dyer, P.N. Johnson, C-J. Yaw I and S. Harley. "BORIS -- An Experiment in In-Depth Understanding of Narratives." Technical Report 188, Computer Science Dept., Yale Univer- sity, January 1981. [81 Rieger, C. "The Commonsense Algorithm as a Basis for Computer Models of Human Memory, Inferene, Belief and Contextual Language Comprehension." In Theoretipti Issues in Natural m p R. Schank, B. Nash-Webber e . . Arlington,'VA: ACL, 1975, 180-195. [9] Forbus, K. "Qualitative Process Theory." AI Memo No. 664, MIT AI Lab, Cambridge, MA, February 1982. [lo] Cercone, N. "A Note on Representing Adjec- tives and Adverbs." In m. IJCAI-II_. MIT, Cam- bridge, MA, August 1977, pp. 139-40. Cl11 Waltz, D.L. "Toward a Detailed Model of Pro- cessing for Language Describing the Physical World." In m. IJCAZ-&L. Vancouver, B.C., Canada, August 1981, pp. l-5. [121 Lehnert, W.G., J.B. Black and B.J. Reiser. "Summarizing Narratives. w In Proc. IJCAI-&l.. Van- couver, B.C., Canada, August 1981, pp* 184-189. Cl31 Allen, J.F. "An Interval-Based Representation of Temporal Knowledge." In Proc. IJCAI-&l. Van- couver, B.C., Canada, August 1981, pp. 221-226. Cl43 Abelson, R. Personal communication. 1981. 1153 Talmy, 1978 Talmy, L. "The Relation of Gram- mar to Cognition - A Synopsis." In m-2: Theoretia m in Natural w m-2. D.L. Waltz ted.). Association for Computing Machinery and The Association for Compu- tational Linguistics, 1978, pp. 14-24. L: 161 Miller, G.A. and P. Johnson-Laird. Lannuane . PemaLun . Cambridge, MA: Harvard University Press, 1976. [17] Cohen, C.R. and P.R. Perrault. "It's for Your Own Good: A Note on Inaccurate Reference.n In & ments pf Qiscourse . Understandlnn . Joshi, Sag, Webber teds.). Cambridge, England: Cambridge Univ. Press, 1981, 217-230. Cl81 Ortony, A. "Why Metaphors are Necessary and Not Just Nice." Eduw The- 25 (1975) 45- 53. [19] DeJong, G. and D.L. Waltz. "Understanding Novel Language. b&l. SL ComDutersandMathlem.1 special issue, to appear (19821. [20] Winston, P.H. "Learning New Principles from Precedents and Exercises: The Details." MIT AI Memo No. 632, Cambridge, MA, May 1981. 87
AN EXPLANATION FOR MINIMAL ATTACHMENT AND RIGHT ASSOCIATION*

Robert W. Milne
Air Force Institute of Technology, AFIT/ENG
Wright-Patterson AFB, OH 45433

ABSTRACT

This paper describes the principles of Right Association and Minimal Attachment and explains how the Sausage Machine and the ATN describe these principles. It is then shown that these two models cannot explain these principles. Finally, it is shown that a production system grammar can both describe these principles and suggest why they must be true.

* This paper describes work done in the Department of Artificial Intelligence and School of Epistemics, Edinburgh University, Scotland.

A. The Sausage Machine

Lyn Frazier and Janet Fodor (FF) [1] have proposed a two-stage model of the Human Sentence Parsing Mechanism (HSPM), called the Sausage Machine (SM). FF proposed that the syntactic analysis of sentences by hearers or readers is performed in two stages. The first stage combines words into phrasal nodes as they are received. They call this the "Preliminary Phrase Packager" (PPP) or the "Sausage Machine". The second stage combines these phrases into sentences. This stage is called the "Sentence Structure Supervisor" (SSS).

The main principles of Frazier and Fodor are these:

1) Right Association: an ambiguous constituent should be "attached into the phrase marker as a right sister to existing constituents and as low in the tree as possible" [6, p.211] [1, p.294].

2) Minimal Attachment: an ambiguous item "is to be attached into the phrase marker with the fewest possible number of non-terminal nodes linking it with the nodes already present" [1, p.320].

Right Association suggests that "terminal symbols optimally associate to the lowest non-terminal node." This predicts the preferred interpretation of:

[1] Tom said that Bill had taken the cleaning out yesterday.
[2] Joe called the friend who smashed his new car up.
[3] John read the note, the memo and the letter to Mary.
[4] The girl took the job that was attractive.

(from [1, p.297]). In each of these sentences, the preference is to attach the final modifier to the lowest right node, just as their principle predicts. This principle also predicts the difficulty in the following sentences:

[5] Joe looked the friend who had smashed his new car up.
[6] John read the note, the memo and the newspaper to Mary.
[7] The girl applied for the job that was attractive.

Their second main point is "Minimal Attachment" (MA). This says: "Each lexical node (or other node) is to be attached into the phrase marker with the fewest possible number of non-terminal nodes linking it with the nodes which are already present." [1, p.320] This principle accounts for the preferred attachment of (for Susan) to the VP in:

[8] John bought the book for Susan.

They suggested that this accounted for the preference for the conjunctive analysis of NP NP in center-embedded sentences, the preference for the first clause to be a main one (as in most garden paths), and the preference [5] for "that" as a complementiser rather than a relative clause when after an NP. It even predicts the use of "that" as a determiner over the "comp" usage.
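Read operationally, the two principles amount to a tie-breaking rule over candidate attachment sites. The following toy chooser is our illustration, not part of any of the models discussed; the site descriptions are invented (Python).

# Prefer the attachment that adds the fewest nonterminals (Minimal
# Attachment); among those, prefer the lowest, rightmost site (Right
# Association).
def choose(sites):
    # each site: (name, new_nonterminals, depth_in_tree)
    return min(sites, key=lambda s: (s[1], -s[2]))

# "John bought the book for Susan": attaching the PP to the VP needs no
# extra node; attaching it inside "the book" would (values illustrative)
sites = [("VP attachment", 0, 1), ("NP attachment", 1, 2)]
print(choose(sites))   # ('VP attachment', 0, 1) -- Minimal Attachment wins

# "...took the job that was attractive": both sites add one node, so the
# lower (deeper) site wins -- Right Association
sites = [("main clause", 1, 1), ("lowest NP", 1, 3)]
print(choose(sites))   # ('lowest NP', 1, 3)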
B. The ATN Response

Wanner has replied to their claims and defended the ATN as a model of human sentence parsing in [6]. Wanner claims that the SM does not model Minimal Attachment and Right Association in a principled and independent way, and in this paper he demonstrated that the ATN can. In accounting for this data, Wanner first presented the background of the ATN and presented these possible arc types:

WORD arc - analyse specific words such as "that" or "to"
CAT arc - analyse grammatical categories such as Noun and Verb
SEEK arc - analyse whole phrases such as NP, VP or S
SEND arc - terminate a network (node)
JUMP arc - express optionality [6, p.216]

He then stated the two principles in terms of these arcs:

Right Association: Schedule all SEND and JUMP arcs after every other type of arc.
Minimal Attachment: Schedule all CAT and WORD arcs before all SEEK arcs.

He presented an argument that shows that this characterisation of the arcs is correct, and FF agreed basically with this re-formulation [2].

Wanner has shown that the ATN can describe the strategies of MA and RA, but cannot explain why these strategies are present. So even though the ATN can show sufficient description, it cannot show necessity. Wanner even wonders this when he asks: "Why does the parser employ these strategies as opposed to others?" [6, p.233] He then admitted that no clear answer was available to this question for an ATN. Wanner wondered why these are true and had trouble suggesting an adequate explanation of why they are necessary.

C. The Production System Reply

Milne [3] presents a deterministic parser, ROBIE, modeled after PARSIFAL [4]. Both parsers are deterministic and use limited lookahead as well as a production system grammar. In the next paragraphs, I will show that not only can ROBIE describe the principles above in the same way, but it can also explain why they must be true. We will look at these principles one at a time.

In some production systems, the following principle gives the order in which to test each rule: "The most constrained pattern is tried first". This provides an order for rules of unequal constraint. The following rules will be tried in the following order:

[so] [that] -> <action1>
[that] [tenseless] -> <action2>
[that] -> <action3>
[noun] [noun] -> <action4>
[noun,np1] -> <action5>
[verb] -> <action6>
[t] -> <action7>

A pattern with one word is more constrained than a pattern with two features, since there is only one lexical item that can match the first, but several lexical items may have, say, "tenseless". A rule with no pattern, [t], will always be tried last. This is necessary to handle many ambiguity issues, for if the rules were tried in the opposite order, the more constrained rules would never be matched. I emphasize that this same principle says that all default rules (rules with no pattern) will have lower priority than any other rule.
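That ordering convention can be computed rather than stipulated. The sketch below is ours, using the rule patterns listed above with each buffer cell rendered as a set of required words/features; the buffer contents are invented (Python).

rules = [
    ("action1", ({"so"}, {"that"})),
    ("action2", ({"that"}, {"tenseless"})),
    ("action3", ({"that"},)),
    ("action4", ({"noun"}, {"noun"})),
    ("action5", ({"noun", "np1"},)),
    ("action6", ({"verb"},)),
    ("action7", ()),                      # the pattern-less default rule
]

WORDS = {"so", "that"}                    # literal words constrain most

def constraint(rule):
    _, pattern = rule
    nwords = sum(len(cell & WORDS) for cell in pattern)
    nfeats = sum(len(cell) for cell in pattern)
    return (nwords, len(pattern), nfeats)  # reproduces the listed order

def fire(buffer):
    # buffer: list of feature sets for the items in the lookahead buffer
    for action, pattern in sorted(rules, key=constraint, reverse=True):
        if len(pattern) <= len(buffer) and \
           all(cell <= item for cell, item in zip(pattern, buffer)):
            return action

print(fire([{"that"}, {"tenseless", "verb"}]))   # action2
print(fire([{"verb"}]))                          # action6, not the default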
1. Right Association

ROBIE does not have the types of arcs that were listed for the ATN earlier, but its rules can be divided into several roughly similar groups. The equivalent of the SEND and JUMP arcs would be the default rules in a packet (a subset of the grammar). If something is optional, typically a packet has a rule to handle the marked case and a default rule to handle the unmarked case; that is, the default rule has no pattern. All equivalents of the SEND and JUMP arcs have no pattern in the current parser. Hence, according to the ordering above, these rules will be tried last. Thus Wanner's explanation of Right Association is a necessary result of ROBIE's design.

2. Minimal Attachment

The deterministic parser has no SEEK arcs. A grammar rule in ROBIE with the pattern [np] does not create a NP. Instead, this pattern will match only if a NP node has already been started. But in the ATN, the arc with the NP on it will cause a push to the NP subnetwork and try to build a NP. Ordering this SEEK arc is the problem under discussion here. On the subject of no SEEK arcs, Marcus states:

"A pattern that triggers on a specific constituent, say a NP or an S, does not initiate parsing of a constituent of that sort. Instead the pattern will trigger only if a constituent of that sort is already in the specified buffer." [4, p.22]

If a pattern has the feature NP, this does not make ROBIE try to parse an NP. Instead the pattern will match only if a node with that feature has already been built. This can be contrasted with the SEEK (or PUSH) arc of the ATN. The SEEK arc tries to build a node of the type that was specified on it. SEEK arcs are like recursive subroutine calls. Because ROBIE does not have SEEK arcs, the problems of ordering them are not relevant. The CAT and WORD arcs will be scheduled first, as Wanner has shown necessary.

MA as characterised by Wanner states essentially that the parser should be data-driven and should reflect the incoming words. Another way to understand the principle of Minimal Attachment is that the word should be used locally if it fits. Since ROBIE has no access to the Active Node Stack, except for the active packets, it is unable to see if the word could be used higher up. If the word could be attached to the lower node, then the grammar rules must be written to handle it there. If these rules are there, then the optional use will be grabbed, and this will behave exactly as Minimal Attachment.

D. Summary

In this paper we have seen that while the principles of MA and RA can be described by arc ordering, why this order must be used is not obvious. We then showed that the only possible order for a production system grammar provides the same arc ordering necessary to describe MA and RA. This suggests that RA and MA are merely side effects of the grammar design and not of a specific parsing mechanism.

Bibliography

[1] Frazier, L. and Fodor, J. "The Sausage Machine: A New Two-Stage Parsing Model." Cognition 6 (1978).
[2] Fodor, J. and Frazier, L. "Is the Human Sentence Parsing Mechanism an ATN?" Cognition 8 (1980).
[3] Milne, R. [forthcoming] "Resolving Lexical Ambiguity in a Deterministic Parser." PhD Thesis, Dept. of AI, Edinburgh University.
[4] Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press (1980).
[5] Wanner, E., Kaplan, J., and Shiner. "Garden Paths in Relative Clauses." Unpublished paper, Harvard University (1975).
[6] Wanner, E. "The ATN and The Sausage Machine: Which One is Baloney?" Cognition 8 (1980).
A SYNTHETIC APPROACH TO TEMPORAL INFORMATION PROCESSING*

Mark D. Grover
TRW Defense Systems, One Space Park, Redondo Beach, Calif. 90278

ABSTRACT

A representation and processing scheme for temporal (time-based) information is presented. Previous computer science approaches to temporal information processing are analyzed. Linguistic analyses of tense, aspect and temporal adverbials provide motivation for an automated general temporal understanding system. A synthetic approach is proposed, combining possible-worlds branching time theory with inertia futures, elements of Montague Grammar, a four-valued logic and the interval semantics time model. Key portions of the model are implemented and demonstrated in a PASCAL program.

I INTRODUCTION

The Artificial Intelligence research community approaches the natural language understanding problem from two angles: on the computer's terms and on the human's. The former approach encourages the design of programs which are competent in narrow linguistic domains. The latter approach currently results in the creation of models for processing language on the broader scale on which humans naturally communicate. The first approach is most useful for technological achievement, and the second for scientific advancement. There are benefits to be gained from both pursuits.

This paper describes a fundamental representation and processing scheme for English temporal phenomena based on the second approach cited above. This perspective calls for a synthesis of results of investigations in linguistics, existing epistemological models of time, and previous efforts in the automated understanding of time-based information. The model can later be enhanced and refined as these theories advance. The fundamental scheme can be programmed immediately. Modest success in all of the above tasks is reported here.

* An expanded version of this paper appears as [7].

II LINGUISTIC MOTIVATION

Many linguists have investigated temporal phenomena of English, such as verb tense, temporal adverbial phrases, and time deictics. Perhaps the most robust and formalized is the work of Dowty [3]. Basing his work on the structures of Montague Grammar [4], Dowty examines English temporal phenomena through the mechanisms of interval semantics and a branching time theory. In his approach, "truth" is relative to a possible world and a possible time interval. Dowty analyzes the following temporally complex sentence and in so doing motivates many processing needs:

(1.) John was leaving on Thursday yesterday.

Through linguistic argumentation, Dowty shows that the following phenomena occur within this single example:

- Past/future relations
- Adverbial phrase interval bounds
- Alternate worlds and times
- Vague event durations
- Deictics ("now," "Thursday," "yesterday")
- Futurate progressive tense/aspect (which entails expectation and uncertainty)

A fully general, automated natural language understanding system which handles temporal information of English must adequately process these phenomena. Limited attempts to capture each of these time-related features have been made, but in disparate research fields. A synthesis of useful elements from each of these fields is described below.

III USEFUL AUTOMATION TECHNIQUES

Several programs and models have already been constructed to represent and process time-based information of various sorts.
Bruce [2] has defined a number of interval relations and "tenses" which I do not find convincingly supported by currently accepted linguistic evidence and analysis, but he has also offered interesting before/after temporal definitions. Findler and Chen [5] have produced a question-answering system for the storage and retrieval of "events" of fixed durations using a restriction/connectivity matrix. This approach may be useful in limited applications, but does not attempt to model natural language expressions.

Finally, Kahn and Gorry [8] have created a so-called "time specialist." This system is noteworthy for its development of appropriate representation schemes for varying temporal event relations. Separate data structures are used for events linked by dates, by undated temporal order and by reference events. For the first time, events with fuzzy bounds are represented. However, it is open to question how to use three separate knowledge representations concurrently. Kahn and Gorry acknowledge that their effort to produce a temporal idiot savant ignores syntactic and semantic time-based knowledge.

IV SYNTHESIS AND IMPLEMENTATION

Since the understanding of time is intimately related to our understanding of the nature of the temporal events we experience, a more fruitful (longer-term) approach is one in which the processing scheme is based on the information and operators shown to exist by linguistic analysis. This is the second approach discussed in the Introduction -- basing automated capabilities on natural (human) linguistic capabilities.

Using this paradigm, a model of an automated general temporal understander was constructed. The model centers around those processes and information schemes motivated by Dowty's linguistic analysis and by branching time theory. The top-level organization appears as Figure 1.

[Figure 1: A General Temporal Understanding System -- syntax trees pass through direct translation (using translation postulates and lexical/tense information) to a logical form, which is simplified to an essential semantic form; an event-graph processor with ordering and time-unit heuristics, a clock, and a conflict-resolution center then store and retrieve event descriptions in the world/time event graph.]

The assumed input is a sentence of English which can be parsed by Montague Semantics methods as modified by Dowty's translation rules. Friedman [6] offers such a parsing scheme, although it is presumed that Dowty's tense rules could be added to her work. Friedman also describes several simplification techniques to reduce the result of translation into a more extensional formula. This formula is then reduced further to a procedural form, i.e., LISP-like notation in preorder. For example, Dowty's translation for sentence (1.) is

(2.) ∃t1 [PAST(t1) ∧ t1 ⊆ yesterday' ∧ AT(t1, PROG[∃t2 [PAST(t2) ∧ AT(t2, predetermined'(^∃t3 [t3 ⊆ Thursday' ∧ FUT(t3) ∧ AT(t3, leave'(j))]))]])]

where
  ∃ means "there exists" or "for some"
  ∧ means "and"
  ^ is the intension operator
  PROG is the progressive operator
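A minimal sketch, assuming intervals are just (start, end) pairs on a numeric time line, of how the interval predicates in (2.) might be evaluated; the constants are invented, and this is our illustration rather than the paper's code (Python).

NOW = 500.0

def past(t):                 # the interval lies wholly before "now"
    return t[1] < NOW

def future(t):
    return t[0] > NOW

def contained_in(t, u):      # t is a subinterval of u
    return u[0] <= t[0] and t[1] <= u[1]

YESTERDAY = (400.0, 450.0)
t1 = (410.0, 420.0)
print(past(t1) and contained_in(t1, YESTERDAY))   # True: a time within yesterday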
The procedural form of this statement appears as Figure 2.

(POSIT
  (FORSOME TM1
    (AND (PAST TM1)
         (CONTAINED-IN TM1 YESTERDAY)
         (AT TM1 [φ1]))))

  where φ1 is [PROG (FORSOME TM2 (AND (PAST TM2) (AT TM2 [φ2])))],
  where φ2 is [PREDETERMINED-INT (FORSOME TM3 [φ3])],
  where φ3 is [AND (CONTAINED-IN TM3 THURSDAY)
                   (FUTURE TM3)
                   (AT TM3 (LEAVE-JOHN))]

FIGURE 2: Procedural Form of "John was leaving on Thursday yesterday."

This form is the input to the implemented portion of the model, encompassing those processes below the dashed line of Figure 1. The input is assumed to be a statement (a belief to be posited, i.e., stored) or a question (a belief to be interrogated, i.e., searched for). Of course, system commands can also appear in the input. Such commands are primarily deictic definitions to be stored in contextual memory, but are also for attention management and I/O control.

Statements are processed by storing or moving instances of generic events (written in "Cambridge Polish") on nodes in a directed world/time graph without cycles. In contrast to an outward-branching tree structure, the network allows the system to combine portions of histories which are "identical" because no distinguishing occurrences are known. Although extensively defined in [7], some operators and functions are briefly described here:

FORSOME: existential quantification over time. Produces search interval bounds nested to the depth of quantification.
AND: conjoins a heuristically ordered list of temporal interval restrictions including deictics, PAST and FUTURE (single bounds) and specific moments.
PROG: creates alternate futures (uncertainty) where an event does not "happen" in all branches, but does in inertia futures (Dowty's term). Inertia worlds are "expected." Expectation is managed by a scripted event series (a la Schank [10]).
PREDETERMINED: establishes the initial link in a chain of scripted events.
AT: simple, "primitive" event occurrence in a specific world and time.

Additional information stored in the graph allows further processing shown useful by other investigators. For instance, reference counts for each event are stored. This can be used not only for garbage collection, but for memory decay modelling. That is, if a hypothetical past event is considered once, but never again, one can cause that possible event to fade from memory (see also Schank [op. cit.]). Related to this phenomenon is the allowance of deviant, but expected, events in the script. Further, each node contains an inertia ratio which is used as a rough expectation measure. Thus, a search across worlds at a given moment will be ordered by the amount of scripted information which appears. No attempt was made to consider rapidly changing contexts.

Tables of deictic variables (intervals and moments) are kept in contextual memory. These definitions can be changed as the speaker's time changes. They provide links between the event graph and the speaker's world.

Question-answering is accomplished by a four-valued belief logic [1, 9]. For coherent questions the answers can be:

T  The event occurs given the worlds and time intervals considered in the question.
F  The system has been told that the event does not occur.
B  It has been told both about the occurrence and non-occurrence of the event (in different possible worlds).
Z  It has no (zero) information relevant to the question.
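A sketch (ours, not the PASCAL implementation) of how the four answer values might be computed from the polarities found in the worlds and intervals a question ranges over (Python).

def answer(observations):
    # observations: True where the event is recorded as occurring, False
    # where it is recorded as not occurring, over the relevant worlds/times
    if not observations:
        return "Z"                    # no relevant information at all
    occurs, denied = any(observations), not all(observations)
    if occurs and denied:
        return "B"                    # told both, in different possible worlds
    return "T" if occurs else "F"

print(answer([True]), answer([False, False]), answer([True, False]), answer([]))
# T F B Z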
Search and storage/retrieval is accomplished by narrowing down temporal intervals by layered and stacked time bounds. These bounds are established by conjoined deictic and moment references, as well as by before/after relations inherited from higher quantified (local) time interval variable bounds. A sample situation in graph form appears in Figure 3.

[Figure 3: Sample Situation -- a world/time graph with a beginning-of-world node and worlds branching at moments such as TM=500.5 and TM=777.7; nodes carry event descriptions such as Eat-food-John, Sleep-John, Bring-food-waiter, Leave-Mary, Not-pay-bill-John, Build-house-Mary, Not-sleep-John, Prepare-food-cook, and Throw-up-John.]

The implemented portion of the model was accomplished in a 3100-line PASCAL program which includes 65 small procedures and seven files. The complete text of the program and a detailed demonstration run appear as appendices to [7]. A brief example of the four resultant belief values for elementary question-answering operations appears in Figure 4.

Input/Meaning                                     Answer

With "now" = TM2:
(Q (FORSOME T (AT T PAY-BILL-JOHN)))
  Does John pay the bill?                            F
(Q (FORSOME T (AT T LEAVE-JOHN)))
  Does John leave?                                   Z
(Q (FORSOME T (AT T SLEEP-JOHN)))
  Does John sleep?                                   B
  but -- with "now" = TM4:                           T

FIGURE 4: Sample Program Response

Notice that as "now" changes, so does the answer to the last question. Such non-monotonicity is a natural and expected result.

The model, while certainly incomplete, allows further experimentation with such phenomena as possible translations of temporal adverbial phrases (e.g., since yesterday). As workers such as Dowty provide similar Montague-based definitions, this model can act as a vehicle for examining computational implications. Many problematical cases remain.

V CONCLUSIONS

Automated understanders of natural language phenomena can be based on processes illuminated by linguists, using organizing schemes of epistemologists and representation and processing techniques of computer scientists. Such a synthesis allows cautious expansion from a reasonable foundation. Features of the model provide mechanisms to experiment with expectation and script construction, memory management (methods of "forgetting"), deixis, unusual tense and aspect combinations, multiple-valued logic, and various "possible worlds" phenomena (e.g., contrafactives and epistemic modalities).

This methodology should also be reassuring to researchers moving from natural language to programs through general model design, rather than by attempting to capture only narrow temporal phenomena or by ignoring useful results in linguistics and epistemology.
REFERENCES

[1] Belnap, Nuel D., Jr. "A useful four-valued logic." In M. Dunn and G. Epstein, eds., Modern Uses of Multiple-Valued Logic. D. Reidel Publishing Co., Dordrecht, Holland, 1977, pages 5-37.
[2] Bruce, Bertram C. "A model for temporal reference and its application in a Q/A program." Artificial Intelligence 3:1-25, 1972.
[3] Dowty, David R. Word Meaning and Montague Semantics. D. Reidel Publishing Co., Dordrecht, Holland, 1979.
[4] Dowty, David R. et al. Introduction to Montague Semantics. D. Reidel Publishing Co., Dordrecht, Holland, 1981.
[5] Findler, N. and D. Chen. "On the problems of time, retrieval of temporal relations, causality, and coexistence." International Journal of Computer and Information Sciences 2(3):161-185, 1973.
[6] Friedman, Joyce et al. "Evaluating English sentences in a logical model: a process version of Montague Grammar." Proceedings of the Seventh International Conference on Computational Linguistics, University of Bergen, Norway, August 1978.
[7] Grover, Mark D. "A synthetic approach to the representation and processing of temporal phenomena of English." Ph.D. dissertation (Northwestern University). University Microfilms, Ann Arbor, Michigan, June 1982.
[8] Kahn, K. and G. A. Gorry. "Mechanizing temporal knowledge." Artificial Intelligence 9:87-108, August 1977.
[9] McCawley, James D. Everything That Linguists Have Always Wanted to Know About Logic* (*but were ashamed to ask). University of Chicago Press, Chicago, Illinois, 1981.
[10] Schank, Roger C. "Language and Memory." Cognitive Science 4(3):243-284, 1980.
Word Sense and Case Slot Disambiguation

Graeme Hirst and Eugene Charniak
Department of Computer Science, Brown University, Providence, RI 02912

Abstract

The tasks of disambiguating words and determining case are similar and can usefully be combined. We present two cooperating mechanisms that each work on both tasks: MARKER PASSING finds connections between concepts in a system of frames, and POLAROID WORDS provide a protocol for negotiation between ambiguous words and cases. Examples of each in action are given. The cooperating mechanisms allow linguistic and world knowledge to be unified, frequently eliminate the need to use inference in disambiguation, and provide a usefully constrained model of disambiguation.

1 POLAROID is a trademark of the Polaroid Corporation.

1. Introduction

The problem of determining the correct sense of a lexically ambiguous word in context is often seen primarily as one of context recognition, a word being disambiguated to the unique meaning appropriate to the frame or script representing the known or newly-established context. For example:

(1) Nadia's new car is a lemon.

The new-car script/frame/context selects the poorly-made-car meaning for lemon. In practice, of course, there are many complications. This approach would fail with (2):

(2) Nadia's new car is the color of a lemon.

In general, word sense can depend not only upon global context, but also (or only) upon the meaning of nearby words. In (3):

(3) The bank was robbed.

the river-edge meaning of bank can be rejected in favor of the meaning that has an association with robbery, without any consideration of global context. The nearby disambiguating words may themselves be ambiguous; a well-known example (Small 1980) is deep pit, where deep can mean profound or extending far down, and pit can be fruit stone or hole in the ground. However, only one meaning of each fits with the other, so they are mutually disambiguating.

A closely related problem is that of case slot disambiguation. Compare:

(4) Ross drove to town with reckless abandon.
(5) Ross drove to town with Nadia.

Determining that reckless abandon is an instance of the manner case (and hence the manner slot inherited by the driving frame -- see Charniak (1981a) for justification), while Nadia is an instance of the accompanier case, is very like disambiguating the word with. Indeed, we can think of determining the case of subject and object NPs as disambiguating the "prepositions" SUBJ and OBJ that are present in the deep structure, but that in English are deleted before the surface realization.

Necessary for word sense disambiguation, then, are mechanisms to find associations between concepts, to handle low-level disambiguation cues, and to handle reconciliation negotiations between ambiguous words. Two mechanisms that cooperate on these tasks are described below: MARKER PASSING and the POLAROID WORD system. We will show how we use these mechanisms for word sense disambiguation, and how they are also used for case slot disambiguation.

2. Marker passing

Marker passing (MP for short) is a mechanism for determining connections in a semantic net of frames. It can be thought of as passing markers around the network, from node to node, from slot to filler, under the constraint rules to be discussed below. Marker passing was proposed by Fahlman (1979) for his NETL system,2
Our marker passer operates independently of, and in parallel with, a parser based on that of Marcus (1980). That is, following only basic morphological analysis, the input sentence goes to both the Marcus parser and the marker passer, both of which separately grind away on each word as it comes in. For example, suppose the input is (6): (6) Nadia’s plane taxied to the terminal. The words plane, taxi and terminal are all ambiguous. The MP places a marker on the frame representing each known meaning of each substantive word in the sentence (including unambiguous ones). Then it fol- lows frame and slot connections from each of these starting points, marking each it passes through. To prevent as many “uninteresting” paths as pos- sible, certain steps are prohibited. For example, a connection could be found between airplane and bowling-ball simply by running up the ISA chain from airplane through vehicle and the like to human-made- object, and then down another ISA chain from there to bowling-ball. This is prevented by the anti-promiscuity rule, which does not allow paths to propagate from nodes with more than 42 connections. This is because nodes with many connections tend to be uninterest- ing ones near the top of the ISA hierarchy-human- made-object, for example. The search is bounded by a depth limit rather than an absolute limit on the num- ber of markers passed; that is, a path may not continue more than d nodes below its starting point.” In the ex- ample above, paths would be found between airplane and airport- building, and airplane and aircraft-ground- travel, indicating that the corresponding meanings of plane, terminal, and taxi should be chosen. Many other paths will also be found; we discuss the problem of FALSE POSITIVES in Charniak (1982), and posit a PATH CHECKER that will filter out many paths that are unin- teresting or silly. 3. Polaroid Words The Polaroid Word mechanism (PW to its friends) is responsible for making the final disambiguation deci- sion for each word. Often, as in the case of (6), this re- quires no more than looking at the paths found by the ‘Marker passing is, of course, expensive when the net is inter- estingly large. Fahlman, who intended uring MP for deduction, proposed super-parallel hardware to back it up. Although OUT scheme is much simpler than Fahlman’s, we too assume that hardware of the future will, like people of the present, be able to derive connections between concepts in parallel. 3 We are still experimenting with constraint rules, and values for n and d. The rules mentioned are examples of those we have considered; we make no claims for their status as Absolute Truth. marker passer. At other times, MP will return noth- ing overwhelmingly conclusive; it is then necessary for PWs to use other information and negotiation between possible meanings. PWs operate in parallel with the parser. As each word comes in to the parser and its part of speech is assigned, a PW process is created for it. (If the word is unambiguous, the process merely announces the mean- ing and uninterestingly hibernates; PWs always knock off as soon as they have narrowed their possible mean- ings to just one.) Meanwhile the marker passer has been looking for connections between the new word and those already seen. The new PW looks at these MP connections, and, if it can, reduces the number of meanings that this instance of the word could repre- sent, It then starts negotiations with preceding words to see if it can further disambiguation, either its own or that of another PW. 
If a PW is still unresolved, it will then sleep until a new word, possibly the bearer of helpful information, comes along.[4]

[4] The name Polaroid Word derives from the fact that the meaning of each word develops right in front of the parser's eyes, and the print can be handed around by the parser, its state of development a matter of indifference as long as the meaning of the word isn't needed to guide the parse. And if the meaning is needed (the parser having occasionally to resort to semantic advice), an examination of a half-developed image may suffice.

A PW, upon creation, knows all the possible senses for its word, and something about the conditions that would support each. For example, the adjective green knows that in its color sense it can only qualify an entity that ISA physical-object. Once awake, a PW proceeds as follows. First, it looks at the decisions that have been announced by the preceding PWs that it can negotiate with. A PW may negotiate with other PWs as follows: a verb with the prepositions and nouns it dominates; a preposition with the noun of its prepositional phrase, and with other prepositions; and a noun or verb modifier with the word it modifies. (All these negotiation paths are bi-directional.) An announcement may take the form "I mean X and so my friends should be Y", or "I could mean X if my friends are Y, or I could mean Z and set no conditions for that", and so on.

Next, the PW looks at the MP chains that have been found between any of its possible meanings and those of the preceding words of the sentence. A strong chain (where strength is inversely proportional to length, though the types of the links traversed are also considered) is prima facie evidence that its starting point is the right meaning for the word. From this information, the PW eliminates any meanings that don't suit its friends.[5] Of the meanings that remain, MP may have suggested one strongly enough for the PW to confidently announce it as the word's sense; otherwise it will announce all its remaining possibilities, and their conditions. It then rests a while.

[5] If it finds that this removes a sense strongly suggested by MP, then it is in trouble; a likely reason is that the word is being used metaphorically, a situation we do not attempt to handle.

The announcement wakes up other PWs that have not yet made their final decision, and each sees whether the new information (both the new word's announcement and any MP chain between it and the new word) helps it make up its mind. If so, it too makes an announcement, again awakening its friends. This continues until none can do any more. Then the next word in the sentence comes in, its PW is created, and the sequence is repeated.

Let's consider this example, concentrating on the subordinate clause:

(7) Ross found that SUBJ the slug would operate OBJ the vending machine.

Note the insertion of the "prepositions" SUBJ and OBJ. We want to work out that the slug is a metal stamping, not a gastropod, a bullet, or a shot of whiskey; that the frame operate refers to is cause-to-function, not perform-surgery; and that SUBJ and OBJ indicate the slots instrument and patient respectively.

The first words are SUBJ and slug; their PWs do not yet have enough information to do anything interesting, nor has marker passing produced anything. Then operate comes along, and tells the others that, inter alia, it might have a meaning that ISA transitive-action, or then again it might have a meaning that ISA intransitive-action. From this, and from the slot-filler requirements for agent and instrument in the transitive-action and intransitive-action frames, the SUBJ PW now realizes it has three possibilities:[6]
- operate is transitive and SUBJ indicates the agent, which should be animate and sentient.
- operate is transitive and SUBJ indicates the instrument, which should be inanimate.
- operate is intransitive and SUBJ indicates the agent, which should be animate and sentient.

[6] At present, we ignore the possibility of passive sentences.

It announces this to the world. The PW for slug can now reconcile this with its own options. It finds that none of its possible meanings fall in the ISA hierarchy under sentient-entity, and announces this. Now SUBJ can eliminate all but the second of its potential meanings, and when it arrives OBJ can immediately decide that it indicates the patient slot.

The noun phrase vending machine now arrives,[7] bringing with it an MP chain that might be vending-machine -> coin -> metal-disc -> slug. This is enough for the slug PW to favor metal-disc as its meaning, and all words are now disambiguated.

[7] We assume that it is recognized as an unambiguous canned phrase representing a single concept.

Now consider this example, in which marker passing is not used at all:

(8) SUBJ the crook operated OBJ a pizza parlor.[8]

[8] We claim that this is exactly the same meaning of operate as in the previous example: cause-to-function. In a context like this, the action described is generic, a matter we ignore.

This proceeds as before, until the SUBJ PW has laid out its three options. Since crook can be either sentient (a criminal) or not (a shepherd's staff), it is unable to make the move that in the previous example disambiguated operate. However, when OBJ comes along, the operate PW can immediately eliminate the intransitive possibility. Still, after the last word is processed, the PWs' negotiations reach a standstill with crook undisambiguated.

If it happens that at the end of the sentence one or more words are not fully disambiguated, then knowledge of a PREFERRED or DEPRECATED MEANING is invoked for them. Preferred and deprecated meanings are indicated as part of the dictionary entry for each word; a word can have zero or more of each, and a meaning need not be either. The possibilities that remain are ranked accordingly, and the top one or ones are chosen. If there are two or more, the word is ambiguous; inference (see next section) may be deployed if desired.

It will be apparent that our use of slot-filler constraints is not dissimilar to conventional Katz and Fodor (1963) selectional restrictions. The difference is that our constraints are not just symbols in a dictionary entry, but rather are part of the knowledge base. That is, we don't mark the word slug as [+animate]; rather, it is part of our world knowledge that the frame gastropod-without-shell represents something that inherits properties from the animate-entity frame that stands above it in the ISA hierarchy.

There are two advantages in this. First, we avoid arbitrary and syntactically unmotivated selectional restrictions; it seems more natural to require the knowledge base to decide whether the object has the necessary properties, using whatever proof techniques may be necessary. Second, we now have a much more unified representation of knowledge, both world and linguistic. We do not, for example, store in one place the fact that the doer of an action must be a sentient entity, and in another that the agent of the verb representing that action must be sentient. Rather, the frame system contains the information but once, and it can be used wherever it is needed. This is in accord with the goal of the project of which the present work forms a part: a uniform knowledge representation suitable for both natural language understanding and problem solving (see section 5). A sketch of the sense-pruning step of the negotiation follows.
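The following Common Lisp fragment sketches only the sense-pruning step just described. The ISA tables, the pw structure, and the one-argument announcement are our own simplifications; the system's real announcements also carry conditions.

    ;; Sketch of a PW pruning its senses against a friend's announcement.
    ;; *isa-parents*, the pw structure, and receive-announcement are
    ;; illustrative names, not the original protocol.
    (defparameter *isa-parents* (make-hash-table))  ; frame -> list of parents

    (defun isa-p (frame class)
      "True if FRAME is CLASS or inherits from it in the ISA hierarchy."
      (or (eq frame class)
          (some (lambda (parent) (isa-p parent class))
                (gethash frame *isa-parents*))))

    (defstruct pw word senses)   ; senses is a list of frame names

    (defun receive-announcement (pw required-class)
      "Drop every sense of PW that does not fall under REQUIRED-CLASS,
    announcing the result when only one sense survives."
      (setf (pw-senses pw)
            (remove-if-not (lambda (sense) (isa-p sense required-class))
                           (pw-senses pw)))
      (when (= 1 (length (pw-senses pw)))
        (format t "~a means ~a~%" (pw-word pw) (first (pw-senses pw))))
      (pw-senses pw))

In example (7), running this check with sentient-entity over slug's four senses leaves nothing surviving, which is exactly the information the slug PW announces back to its friends.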
4. What we can't do

It should be clear that a combination of marker passing and Polaroid Words is not a replacement for inference and pragmatics in word sense and case disambiguation; rather, it serves to substantially reduce the number of times that these must be employed. In particular, of course, it eliminates most cases where inference would be employed to find contextual connections.

5. Existential considerations

Both MP and PW exist at present only in prototype implementations. We are writing more sophisticated versions of each as part of our group's development of an integrated language-understanding and problem-solving system (Charniak 1981b; Wong 1981a, 1981b), and will try out different MP constraint rules and PW negotiation protocols to find which are best.

Our use of MP will not be limited to disambiguation; we also expect higher semantic levels of the system to use it for determining context and finding causal links between events; see Charniak (1982) for discussion.

Polaroid Words are implemented as processes that interpret Lisp data structures that each word has in its dictionary entry, containing purely lexical information. We anticipate developing a simple, constrained declarative language, similar in spirit to Marcus's PIDGIN (1980), for writing Polaroid Words. This is in contrast to approaches such as Small's (1980), where the meaning of a word is represented as a large, barely-constrained procedure. We avoid this by letting the parser and the marker passer do much of the work that Small requires his "word experts" to perform.

6. Conclusion

We have presented a pair of cooperating mechanisms that both disambiguate word senses and determine case slots by finding connections between concepts in a network of frames and by negotiating with one another to find a set of mutually satisfactory meanings. This approach permits linguistic and world knowledge to be combined in a single representation, and, in turn, is to be part of a system in which knowledge for language understanding and problem solving are similarly unified.

References

CHARNIAK, Eugene (1981a). "The case-slot identity theory." Cognitive Science, 5(3), July-September 1981, 285-292.

CHARNIAK, Eugene (1981b). "A common representation for problem-solving and language-comprehension information." Artificial Intelligence, 16(3), July 1981, 225-255.

CHARNIAK, Eugene (1982). "Passing markers: A theory of contextual inference in language comprehension." Technical report CS-80, Department of Computer Science, Brown University, Providence, RI 02912.

FAHLMAN, Scott (1979). NETL: A system for representing and using real-world knowledge. Cambridge, MA: The MIT Press, 1979.

KATZ, Jerrold Jacob and FODOR, Jerry A (1963). "The structure of a semantic theory." Language, 39(2), 1963, 170-210.

MARCUS, Mitchell P (1980). A theory of syntactic recognition for natural language. Cambridge, MA: The MIT Press, 1980.

QUILLIAN, M. Ross (1968). "Semantic memory." In: MINSKY, Marvin Lee (editor). Semantic Information Processing. Cambridge, MA: The MIT Press, 1968, 227-270.

SMALL, Steven (1980).
"Word expert parsing: A theory of distributed word-based natural language understanding." Doctoral dissertation [available as technical report TR-954], Department of Computer Science, University of Maryland.

WONG, Douglas (1981a). "Language comprehension in a problem solver." Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, 24-28 August 1981, 7-12.

WONG, Douglas (1981b). On the unification of language comprehension with problem solving. Doctoral dissertation [available as technical report CS-78], Department of Computer Science, Brown University, 1981.
WORD LEARNING WITH HIERARCHY-GUIDED INFERENCE*

David M. Keirsey**
Department of Information and Computer Science
University of California, Irvine
Irvine, California

* This research was partially supported by grants N66001-81-C-0086 and N00123-81-1708 from the Naval Ocean Systems Center.
** Now at Hughes Research Labs.

ABSTRACT

A technique for learning new words is discussed. The technique uses expectations generated by the context and an ISA hierarchy to guide the inference process. The learning process uses the context of several independent occurrences of the word to converge on its meaning.

I INTRODUCTION

Many of the words we know are not learned by having someone else tell us the definition or by looking them up in a dictionary; rather, we learn the words by extracting the meaning from the context. The learning of a word by context usually occurs over several instances. Several researchers [4, 8] have made attempts to learn new words from context; however, their programs did not try to extract the meaning over several occurrences of the word. Moreover, the first encounter with a new word usually gives only a vague meaning, because there are several possible interpretations. On successive encounters with the word, however, more can be learned about it. The meaning becomes more refined by the addition of attributes extracted from context. Using the attributes to guide the selection of the definition of the word, we converge onto a known concept or idea that closely reflects the meaning. For example, consider the following three sentences, each having no connection with the others except the unknown word.

1. John picked up the foobaz.
2. John wrote with a foobaz.
3. John used an ink foobaz.

It should be clear from the three examples that the meaning of foobaz is closely related to a pen. Each sentence adds new information to the word foobaz, but the sentences alone are not sufficient to conclude the concept, because foobaz could have been some other reasonable concept, as in: John wrote with a frenzy; John used an ink blotter.

Application of NLP techniques to building real prototype systems has a major stumbling block in that current NLP systems have handcrafted vocabularies. That is, each word must be known to the system a priori. This problem is especially evident in the Navy tactical message domain [6]. Navy communications send a large number of messages from ship to ship containing natural-language text that reports on standard operations. Message understanding by machine is possible because the messages deal in a limited domain. However, the messages contain many words that are acronyms, abbreviations, and generated names (e.g., call names), which an NLP system will not know at the time. Hence, an NLP system that learns words automatically is highly desirable.

II REFINEMENT BY USING KNOWLEDGE

Expectation-based understanding [7, 2, 10] provides the system with information that can be useful in predicting the meaning of an unknown word [4]. These expectations are used as semantic restrictions on what should occur. When an unknown word is encountered, the expectations can serve as a hypothesis about the meaning. For instance, in one of its definitions, the verb phrase "picked up" has the semantic restriction on its direct object that it be a physical object. So in sentence one:

John picked up the foobaz.

a hypothesis could be that foobaz means "physical object." Physical object is a very general concept, and it is more likely that the real intended meaning is more specific. There are an infinite number of specializations of physical object that foobaz could be.
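As a small illustration of how such an expectation seeds a hypothesis, consider the following Common Lisp sketch; the restriction table, the function name, and its contents are invented for the example.

    ;; Sketch: a verb sense's semantic restriction on a case slot becomes
    ;; the first hypothesis for an unknown filler.  Table contents are
    ;; invented for illustration.
    (defparameter *restrictions*
      '(((pick-up . direct-object) . physical-object)
        ((write-with . instrument) . writing-implement)))

    (defun hypothesize (verb slot unknown-word)
      (let ((class (cdr (assoc (cons verb slot) *restrictions*
                               :test #'equal))))
        (when class
          (format t "Hypothesis: ~a ISA ~a~%" unknown-word class)
          class)))

    ;; (hypothesize 'pick-up 'direct-object 'foobaz)
    ;; prints "Hypothesis: FOOBAZ ISA PHYSICAL-OBJECT"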
The learning of the important attributes for refining the meaning of foobaz needs some guidance. General learning systems [9, 3] have had a problem of determining which set of features is important to generalize or differentiate on. The basic problem with the general learning approach is the lack of knowledge. One form of knowledge representation useful in guiding the refinement of meaning is the ISA hierarchy. The ISA hierarchy, a well-known AI representation [1], provides an organization of concepts in which specialized concepts share properties of general concepts through inheritance, and also add properties of their own. The problem is to relate the unknown word to a known concept in the hierarchy. The hierarchy provides an organization for the merging of expectations gathered from context.

A. Hierarchy-guided deduction

Consider the following three sentences taken from three different Navy message reports of their activities.

1. Our aircraft overflew FRAMUS.
2. We saw running lights of FRAMUS.
3. Our ship overflown by aircraft of FRAMUS.

After sentence one, the inference can be made that FRAMUS is a SURFACE-CRAFT. The inference comes from the semantic restriction that the object can be overflown; that is, ships and subs are the only things that can be overflown in the Navy world. After sentence two, FRAMUS should be a SHIP. This inference comes from the fact that ships are the only surface craft that have running lights, whereas submarines do not. The third sentence uses the fact that carriers are the only ships that possess aircraft. Notice that aircraft can be possessed by a carrier, a base, or a mission, so the restrictions gained by sentences one and two do restrict the possible meaning of FRAMUS in sentence three. Figure 1 graphically represents the progressive refinement of the word FRAMUS; the numbers indicate the node FRAMUS was attached to after encountering each sentence. The ISA hierarchy is used to guide the refinement of the word meaning. Each descent of the tree adds more information to what the word means. That is, the lower nodes have more attributes, facts, and inferences attached to them, many of which are obtained by inheritance from higher concepts.

[Figure 1: Hierarchy-guided deduction. An ISA tree rooted at PHYSICAL-OBJECT, with branches AGENT, DEVICE (SENSOR: RADAR, VISION, SONAR; WEAPON: GUN, MISSILE, TORPEDO), and PLACE (MILITARY-INSTALLATION, OCEAN, HARBOR). Under AGENT, MILITARY-CRAFT divides into SURFACE-CRAFT (SHIP, with CARRIER, DESTROYER, and CRUISER below it, and SUBMARINE) and AIRCRAFT (HELICOPTER, AIRPLANE). The numerals mark where FRAMUS was attached after each sentence, descending to CARRIER.]

B. Tree Intersection

Suppose the encounters with the word happened in a different order. For example:

1. Our ship overflown by aircraft of FRAMUS.
2. Our aircraft overflew FRAMUS.
3. We saw running lights of FRAMUS.

The hierarchy provides a method of intersecting possible hypotheses. In this example the first hypothesis is that FRAMUS is either a carrier, a base, or a mission. The second message confirms that FRAMUS is a carrier, because the inference from the semantic restriction of overflew is that FRAMUS ISA ship, and a carrier ISA ship. The other hypotheses are eliminated by the fact that neither a base nor a mission ISA ship. The third message only confirms the hypothesis of FRAMUS being a carrier and does not add any more information. Changing the order in other ways eliminates hypotheses in different orders, but the result, FRAMUS ISA carrier, is the same. A sketch of this refinement-by-intersection appears below.
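As forward-referenced above, here is a minimal Common Lisp sketch of refinement by intersection. The tree representation and the function names are ours, and the real system also decides which inference rules may fire.

    ;; Sketch of hierarchy-guided refinement.  Each sentence contributes
    ;; a set of candidate nodes; the running hypothesis is the
    ;; intersection of their subtrees, and the word is attached at the
    ;; most general surviving node.
    (defparameter *isa-children* (make-hash-table))  ; class -> subclasses

    (defun subtree (class)
      "CLASS together with everything below it in the ISA tree."
      (cons class (mapcan #'subtree (gethash class *isa-children*))))

    (defun refine (hypotheses candidates)
      "Intersect the subtrees of the current HYPOTHESES with those of
    the new CANDIDATES, so a general hypothesis remains compatible with
    a more specific expectation."
      (intersection (mapcan #'subtree hypotheses)
                    (mapcan #'subtree candidates)))

    (defun attachment-point (hypotheses)
      "The most general node of HYPOTHESES: the one whose subtree
    covers all the others."
      (find-if (lambda (h) (subsetp hypotheses (subtree h))) hypotheses))

    ;; FRAMUS: (refine (refine '(carrier base mission) '(ship)) '(ship))
    ;; leaves only CARRIER's subtree, so FRAMUS attaches at CARRIER.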
C. Hierarchy-guided induction

Hierarchy-guided induction, i.e., refining a word by traversing up the hierarchy tree, occurs when the initial guess of the meaning is more specific than the actual meaning. For example, take the following sentences:

1. TO1 identified as Kashin.
2. TO1 identified as a ship.
3. TO1 identified as a surface craft.

(Kashin is a known word naming a particular class of Russian destroyer.) Figure 2 illustrates the refinement of TO1 by traversing up the tree.

[Figure 2: Hierarchy-guided induction. The same ISA tree as in Figure 1, with KASHIN below DESTROYER; the numerals mark the successively more general nodes to which TO1 was attached after each sentence, ending at MILITARY-CRAFT.]

From the first sentence it can be inferred that TO1 is at least one step above a Kashin (a Russian destroyer) in the classification hierarchy, by noting that if TO1 can be identified as a Kashin, then a TO1 could also be identified as some Russian destroyer other than a Kashin. By the same reasoning, TO1 in the second sentence must be more general than a ship, for TO1 could have been identified as a submarine, i.e., any surface craft. In the third sentence, likewise, a TO1 must be some type of military craft, i.e., an aircraft or a surface craft. (TO1 is an acronym for Target Of Interest.)

The use of both deduction and induction methods on a word could lead the hypothesized meaning to move up and down the tree indefinitely. The method of never moving an unknown word down the tree once the word has moved up the tree prevents this vacillation from occurring. For example, if the sentence "Diesel TO1 detected" were encountered after the previous three sentences, then normally the rule could be used that only submarines are referred to as diesel, and therefore TO1 must be a submarine. However, since TO1 had been raised to MILITARY-CRAFT by induction, this rule cannot be invoked, because it would move TO1 down the tree.

D. Detecting Errors

In the Navy tactical messages, the sentences are more often than not ungrammatical. Ungrammatical sentences can sometimes confuse the reader as to the usage of a word. Learning from an unknown word used ungrammatically can lead to eliminating all the hypotheses as to the meaning. Consider the following three sentences from three messages.

1. Carrier launched two HS helicopters.
2. HS gained contact on sub.
3. One helicopter from HS sighted hostile ship.

The hypotheses generated from sentence one are: HS is a type of helicopter, HS is an attribute of helicopters, or HS is some type of action that helicopters perform. Sentence two reduces the hypothesis to HS being a type of helicopter. Sentence three generates the expectations that HS is a ship, a base, a mission, or a task force. Since the intersection of the current hypothesis and these expectations is nil, there has been a mistake. As it turns out, the first sentence is grammatically incorrect. The correct sentence is: Carrier launched two helicopters of HS. HS actually stands for helicopter squadron, which is a task force of helicopters for the carrier. The expectations generated by the sentence are that HS is an adjective, a noun modifying a noun, or a verb acting as a modifier, but not that HS is possessive, which is the proper syntactic construct needed in this instance. Thus, the incorrect expectations were generated. Unfortunately, there is no guarantee that syntactic errors will be detected; nevertheless, the technique has a chance to detect them. The bookkeeping behind this error check and the no-redescent rule of section C is sketched below.
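The following Common Lisp fragment sketches that bookkeeping, reusing refine from the earlier sketch; the entry structure and the warning message are our own illustration.

    ;; Sketch of the two guards from sections C and D: a rule that would
    ;; move a raised word back down the tree is ignored, and an empty
    ;; intersection is flagged as a probable (often grammatical) error.
    ;; An induction step is assumed to set raised-p when it fires.
    (defstruct entry word hypotheses raised-p)

    (defun update-entry (entry candidates &key downward-p)
      (if (and downward-p (entry-raised-p entry))
          entry                     ; never re-descend after induction
          (let ((new (refine (entry-hypotheses entry) candidates)))
            (if (null new)
                (warn "No consistent meaning for ~a; possible ungrammatical use"
                      (entry-word entry))
                (setf (entry-hypotheses entry) new))
            entry)))

Applied to the HS example, the third sentence's candidates share no subtree with the running hypothesis, the intersection comes back nil, and the warning fires instead of silently discarding everything.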
E. Knowledge-based learning

This knowledge-based learning, using ISA hierarchies as the knowledge base, has the advantage that the knowledge can guide the learning process; the disadvantage is that most of the knowledge must be built in or acquired some other way. Because the techniques are automatic (i.e., not interactive), they do not substantially build on the hierarchy tree. That is, the techniques only associate an unknown word with a concept already fully represented and do not try to form a totally new concept. However, other researchers have used hierarchies to do a form of knowledge acquisition [5], and a combination of the two methods could significantly add to the power of each.

III CONCLUSIONS

The ISA hierarchy is a useful representation that can guide the inference process in learning new words. This technique assumes that it takes several encounters with the new word to acquire a reasonable meaning, and that the basic concept is already known. The technique works well with most concrete nouns, which can be easily represented in a hierarchy, but it does not work as well with word types that are not expressed as well in hierarchies, such as verbs. It is envisioned that other techniques of knowledge acquisition must be used in conjunction with it to acquire such words and concepts.

REFERENCES

[1] Brachman, R. J. On the Epistemological Status of Semantic Networks. BBN Report 3807, Bolt Beranek and Newman Inc., April, 1978.

[2] Cullingford, R. E. Script Application: Computer Understanding of Newspaper Stories. Technical Report, Yale University, 1978.

[3] Dietterich, T. G. and Michalski, R. S. Learning and Generalization of Characteristic Descriptions: Evaluation Criteria and Comparative Review of Selected Methods. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, August, 1979.

[4] Granger, R. H., Jr. FOUL-UP: A Program that Figures Out Meanings of Words from Context. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 172-178. MIT, 1977.

[5] Haas, N. and Hendrix, G. G. An Approach to Acquiring and Applying Knowledge. In First National Conference on Artificial Intelligence, pages 235-239. August, 1980.

[6] Keirsey, D. M. Natural Language Processing Applied to Navy Tactical Messages. Technical Document 324, Naval Ocean Systems Center, February, 1980.

[7] Schank, R. C. Conceptual Information Processing. North Holland, Amsterdam, 1975.

[8] Selfridge, M. A Computer Model of Child Learning. In The First Annual Conference on Artificial Intelligence. AAAI, 1980.

[9] Vere, S. A. Induction of Relational Productions in Presence of Background Information. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 349-355. MIT, August, 1977.

[10] Wilensky, R. Understanding Goal-Based Stories. Research Report 140, Department of Computer Science, Yale University, September, 1978.
Talking to UNIX in English: An Overview of UC*

Robert Wilensky
Division of Computer Science
Department of EECS
University of California, Berkeley
Berkeley, Ca. 94720

* This research was sponsored in part by the Office of Naval Research under contract N00014-80-C-0732 and the National Science Foundation under Grant MCS79-06543.

1. Introduction

UC (UNIX Consultant) is an intelligent natural language interface that allows naive users to communicate with the UNIX** operating system in ordinary English. UC is currently capable of handling simple dialogues, of which the following is representative:

** UNIX is a trademark of Bell Laboratories.

Dialogue 1
User: How can I delete a file?
UC: Typing 'rm filename' will remove the file with name filename from your current directory.
User: Move a file?
UC: Typing 'mv filename newfilename' will move the file with name filename to a file with name newfilename.
User: How might a file be created in UNIX?
UC: The best way to create a new file is to use the editor, i.e., 'ex filename'.
User: A directory?
UC: Typing 'mkdir dirname' will create the directory 'dirname' in the current directory.
User: How can I find out how full the disk is?
UC: Typing 'df' will tell you how full the disk is.
User: How should I find out who is on the system?
UC: Typing 'who' will tell you who is connected to UNIX.

UC is able to handle most variations of such requests. For example, asking "What is a way to remove a file?" or "How does one delete a file?" will return the same answer as is given above to "How can I delete a file?" In addition, UC can handle some requests that require greater awareness of the context generated by a dialogue. For example, consider the following dialogue with UC:

Dialogue 2
User: How do I print the file fetch.1 on the line printer?
UC: To print the file fetch.1 on the line printer type 'lpr fetch.1'.
User: Has the file fetch.1 been printed yet?
UC: The file fetch.1 is in the line printer queue.
User: How can I cancel it?
UC: To remove the file fetch.1 from the line printer queue you must type 'lprm fetch.1'.

In this example, UC determines that 'it' refers to the request to print a file rather than some more recently issued command, because the immediately preceding dialogue has brought this request into focus. UC is also being extended to handle requests requiring more complex reasoning to formulate an intelligent response.

2. The Problem Domain

The problem of naive user communication with an operating system provides an appealing domain for the study and application of a number of Artificial Intelligence techniques. The domain is complex enough to provide substantial sub-problems, but not so unbounded that a useful working system must possess a hopelessly large repertoire of knowledge. The task involves a quite realistic use of natural language, namely, participating in a directed dialogue.

In addition, UC provides an application of natural language processing that people would actually be willing to use. Some common uses of natural language processing, for example, natural language front ends to databases, probably do not have this property. Once a user has learned a somewhat more cryptic query language, he is apt to prefer it over natural language, as it is likely to provide a simpler and less verbose mode of expression for the majority of queries. In contrast, a naive user of a system would probably be willing to converse in English with a machine, since the alternatives are apt to be worse.
Manuals and on-line help facilities are only marginally useful: a user usually needs to be fairly sophisticated in order to issue the right help command, and even then these are successful only a fraction of the time. People knowledgeable enough to be helpful are not always available, and usually do not have the time to offer.

In addition, the domain is "soft" in the sense that a system that does not work all the time is still likely to be useful. For example, if UC failed to give the desired answer, the user is no worse off than he would be otherwise, and still has the normal repertoire of options at his disposal. Of course, if UC does provide a useful answer, the user will benefit from it. Thus UC is an AI system that will be useful even as it expands to a larger portion of its domain.

Lastly, the problem integrates a number of areas of concern for artificial intelligence, including natural language understanding, natural language production, planning, and problem solving.

3. The Structure of UC

UC runs on a VAX 11/780 and is programmed in FRANZ LISP and in PEARL, an AI language developed at Berkeley (Deering et al., 1981). Although UC is itself a new system, it is built out of a number of components, some of which have been under development at Berkeley for some time. The natural language understanding component consists of an extended version of PHRAN (PHRasal ANalyzer), originally programmed by Yigal Arens on a DEC-10 and moved to the VAX by Margaret Butler. The natural language production is to be done by PHRED (PHRasal English Diction), PHRAN's sibling language processing program, but as of this writing PHRED has not been moved to the VAX. These programs are described in Wilensky (1981b).

PHRAN reads sentences in English and produces representations that denote their meanings. One of the primary design goals of PHRAN is that it be easily extensible to new linguistic forms and domains. This is particularly important for implementing UC, as we expect to be continually extending the system's scope. Applying PHRAN to the domain of requests about UNIX was as much a test of PHRAN's versatility as it was a useful application. In fact, most of the forms used by UC are new to PHRAN, and were added to PHRAN's knowledge base of English by David Chin. The approximate amount of effort to extend PHRAN to handle one of these new forms, once the representation used to denote the meaning of the form has been agreed upon, is currently on the order of a few minutes. Right now, the PHRAN component of UC can understand requests about 25 substantially different topics, each in many different linguistic formats.

One of the extensions to PHRAN being realized in UC is a mechanism to handle some forms of ellipsis, implemented by Lisa Rau. This mechanism handles both intra-sentential forms, such as "I want to delete the small file and the large", and inter-sentence forms, as in "How do I delete a file? A directory?" Ellipsis is handled by first letting the basic PHRAN analysis mechanism produce what it can from the input. PHRAN is a pattern-based system whose patterns map various levels of linguistic structures into meaning representations. Thus the process of understanding an utterance leaves a history of the patterns used to arrive at that understanding. If the result of this process is incomplete (generally, something that is not a sentence where a sentence is expected), then previously used PHRAN patterns are examined to see if they match the patterns used to understand the fragment. If so, the words of the fragment are substituted for the words of the previous sentence that correspond to the common pattern, and the resulting linguistic unit is re-analyzed by PHRAN.
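A toy Common Lisp version of the substitution step follows. Tagging each word with a bare symbol standing for the pattern element it matched is a simplification of PHRAN's patterns, and the function name is ours.

    ;; Toy ellipsis resolution: each analyzed word is paired with the
    ;; pattern element it matched; the fragment's words overwrite the
    ;; matching words of the previous sentence, and the result is handed
    ;; back for re-analysis.  Duplicate tags are not handled, unlike the
    ;; real mechanism.
    (defun resolve-ellipsis (previous fragment)
      "PREVIOUS and FRAGMENT are lists of (word . tag) pairs."
      (let ((result (copy-alist previous)))
        (dolist (new fragment)
          (let ((old (find (cdr new) result :key #'cdr)))
            (when old (setf (car old) (car new)))))
        (mapcar #'car result)))

    ;; (resolve-ellipsis '((how . wh) (do . aux) (i . np) (delete . v)
    ;;                     (a . det) (file . n))
    ;;                   '((a . det) (directory . n)))
    ;; => (HOW DO I DELETE A DIRECTORY)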
3.1. Context in UC

Another extension to PHRAN involves the implementation of a context mechanism by Yigal Arens. This is essentially an activation-based memory mechanism used by UC to keep track of the focus of the discourse (cf. Grosz 1977), handle some aspects of reference, and help in word disambiguation. When a concept is encountered by PHRAN, it is given a level of activation; concepts stored in memory as being associated with this concept are also given a level of activation. As processing proceeds, the activation levels are manipulated according to an algorithm described in Arens (1982).

For example, in Dialogue 2 above, the question preceding the cancellation command brings back into focus the previous request to print the file. That is, asking this question increases the activation level of the memory of that request. Then when the last request is issued, UC will try to interpret 'it' in this new context. Since the request to print the file is now more highly activated than any intervening request, UC will determine that 'it' refers to the print request even though this request is not the most recent.

3.2. Processing Requests in UC

After a meaning representation for a sentence has been produced by PHRAN, a reasoning process comes into play. For the simplest requests, UC simply uses the meaning representation of the request as a query into an associative knowledge base of facts to see if there is a stored answer. This knowledge base, and in fact much of UC, is implemented in PEARL (Package for Efficient Access to Representations in LISP). PEARL is an AI language developed with flexibility and time and space efficiency in mind. By using PEARL, it is possible to assert new facts into the knowledge base and have UC use these facts quickly in its various response formation processes. PEARL is described in more detail in Deering, Faletti, and Wilensky (1981, 1982).

For example, the following is a simplified version of the PEARL representation of the fact that using the 'rm' command is a way to delete a file:

    (planfor (concept (causation
                        (antecedent (do (actor ?X)))
                        (consequent (state-change (actor ?F/is-file)
                                                  (state-name physical-state)
                                                  (from 10)
                                                  (to -10)))))
             (is (use (actor ?X)
                      (command (name rm) (arg ?F)))))

Questions of the form "How can I delete a file?" are analyzed into a form similar to this, but with the "is" slot filled with a variable. PEARL's retrieval facility finds the stored fact based on this similar structure; it can then be used to supply an answer. (Right now, canned text associated with these representations is used, because the PHRED generator does not yet work on the VAX.)
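A toy version of this associative lookup is sketched below in Common Lisp. Symbols beginning with "?" are treated as variables; PEARL's hashing, typed slots (such as ?F/is-file), and efficiency machinery are not modeled.

    ;; Toy pattern matcher standing in for PEARL's retrieval.  FAIL is a
    ;; sentinel distinct from the empty (successful) binding list.
    (defun var-p (x)
      (and (symbolp x)
           (plusp (length (symbol-name x)))
           (char= (char (symbol-name x) 0) #\?)))

    (defun match (pattern datum bindings)
      (cond ((eq bindings 'fail) 'fail)
            ((var-p pattern)
             (let ((bound (assoc pattern bindings)))
               (cond ((null bound) (cons (cons pattern datum) bindings))
                     ((equal (cdr bound) datum) bindings)
                     (t 'fail))))
            ((var-p datum) (match datum pattern bindings))
            ((and (consp pattern) (consp datum))
             (match (cdr pattern) (cdr datum)
                    (match (car pattern) (car datum) bindings)))
            ((equal pattern datum) bindings)
            (t 'fail)))

    (defun retrieve (query facts)
      "Return the bindings under which QUERY matches a stored fact, or FAIL."
      (dolist (fact facts 'fail)
        (let ((b (match query fact '())))
          (unless (eq b 'fail) (return b)))))

A query whose "is" slot holds a variable such as ?answer would then come back with ?answer bound to the (use ...) form of the stored planfor fact above.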
4. Extending UC to Process More Complex Requests

Most requests require more complex processing, however. For these situations, UC uses a reasoning component based on the PANDORA planning mechanism (Wilensky 1981a). PANDORA, implemented by Joe Faletti, is based on a model of planning in which goal detection and goal interactions play a prominent role.

For example, consider the indirect request: "I need some more disk space." A literal response to this remark might be "Type 'rm *'", which is most likely not what the user had in mind. The problem with this response, of course, is that it violates an unstated user goal, namely, that the user wants to preserve what he has already put on the disk. An intelligent consultant must be able to infer such goals, and reason about the interactions of such goals with those explicit in the user's request. In this example, an implicit goal (preserving a file) may conflict with the stated goal (getting some more space), and this possibility must be explored and dealt with.

Although it was originally constructed to be an autonomous planner, PANDORA's architecture is well suited for this sort of reasoning. PANDORA first tries to apply a stored plan to a given goal. It then simulates the situation that may result from the current state of the world, using a mechanism called a Projector. In the above example, the simulation will reveal, among other things, that some files will get destroyed, as this is a consequence of the 'rm' command.

Another of PANDORA's basic components is called the Goal Detector. This mechanism determines the goals the planner should have in a given situation. The goal detector is essentially a collection of demons that respond to changes in the environment, including the simulated environment created by the Projector. In this example, when the simulated future reveals the possible destruction of a file, the goal detector will react to this possibility by inferring the goal of preserving this file.

Since this preservation goal arises from a plan of the user's, PANDORA also infers that there may be a goal conflict between this goal and the goal underlying the user's original request. PANDORA makes this inference by considering a goal giving rise to a preservation goal as another situation in which to detect a goal (namely, the goal of resolving the goal conflict). Then a plan for this "resolve-goal-conflict" goal can be sought by successive application of the whole planning process. This algorithm makes use of a meta-planning representation for planning strategies: the goal of resolving a goal conflict is actually a meta-goal, that is, a goal whose successful execution will result in a better plan for other goals. This formulation allows the goal detection mechanism to be used to solve the problem of goal conflict detection, and the normal planning process to find a resolution to such a problem. More detail on meta-planning and the associated algorithms is given in Wilensky (1981a).

In the example at hand, the presence of a goal conflict is only a possibility, as the user may well have some files that he doesn't need. A general strategy in such situations is to determine whether the possibility actually exists. This would lead to the generation of the question "Do you have any files that you do not need?" If the user's response is negative, then the conflict does in fact exist, and a conflict resolution strategy must be employed. A strategy that is applicable to all conflicts based on a shortage of resources is to try obtaining more of the scarce resource. In the example above, the scarce resource is disk space. PANDORA would then create a new goal of obtaining more disk space. Since a stored plan for this goal is to ask the systems manager for more room, UC can respond with the advice that the user request more room from the systems manager. An implementation of PANDORA and a fuller explanation of its role in UC is found in Faletti (1982).
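A highly simplified Common Lisp sketch of the goal-detection step follows. The effect representation, the single demon, and the projected effects are all invented for illustration; PANDORA's actual Projector and demons are far richer.

    ;; Sketch: demons watch the simulated effects of a candidate plan
    ;; and raise new goals; here a (destroyed x) effect raises
    ;; (preserve x), which the planner must then check for conflict
    ;; with the goal that produced the plan.
    (defparameter *goal-demons*
      (list (lambda (effect)
              (when (and (consp effect) (eq (first effect) 'destroyed))
                (list 'preserve (second effect))))))

    (defun detect-goals (projected-effects)
      "Run every demon over every simulated effect; collect new goals."
      (loop for effect in projected-effects
            append (loop for demon in *goal-demons*
                         for goal = (funcall demon effect)
                         when goal collect goal)))

    ;; Projecting the plan "rm *" might yield
    ;;   ((freed disk-space) (destroyed user-files))
    ;; so (detect-goals ...) returns ((preserve user-files)).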
Of course, it is possible to store requesting more room as a plan for the original goal, and bypass all this complex reasoning. The problem with such a solution is that it would preclude the possibility of informing the user that he could delete a file that he doesn't need. Alternatively, we could build a more complex canned plan that checks for just this situation. That is, the stored plan would be to delete a file one doesn't want and, if this fails, to request more space. The problem here is that it would be necessary to include all possible interactions in the canned plan. While it may be desirable to include some of these some of the time, to deal with unanticipated interactions a more general reasoning mechanism is required. For example, if the systems manager is unavailable and the user's desire for the disk space is great enough, deleting a file may still be a reasonable solution; however, it is unlikely that such a possibility would be anticipated by the canned plan.

References

Arens, Y. 1982. The Context Model: Language Understanding in Context. In the Proceedings of the Fourth Annual Conference of the Cognitive Science Society. Ann Arbor, MI. August 1982.

Deering, M., Faletti, J., and Wilensky, R. 1981. PEARL: An Efficient Language for Artificial Intelligence Programming. In the Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, British Columbia. August, 1981.

Deering, M., Faletti, J., and Wilensky, R. 1982. The PEARL Users Manual. Berkeley Electronic Research Laboratory Memorandum No. UCB/ERL/M82/19. March, 1982.

Faletti, J. 1982. PANDORA -- A Program for Doing Commonsense Planning in Complex Situations. In the Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, PA. August, 1982.

Grosz, B. J. 1977. The Representation and Use of Focus in a System for Understanding Dialogs. In the Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Carnegie-Mellon University, Pittsburgh, PA.

Wilensky, R. 1981a. Meta-planning: Representing and using knowledge about planning in problem solving and natural language understanding. Cognitive Science, Vol. 5, No. 3. 1981.

Wilensky, R. 1981b. A Knowledge-based Approach to Natural Language Processing: A Progress Report. In the Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, British Columbia. August, 1981.
An Expert System for Interpreting Speech Patterns

Renato De Mori°, Attilio Giordana°, Pietro Laface§, and Lorenza Saitta°

° Istituto di Scienze dell'Informazione, Universita di Torino, Corso Massimo d'Azeglio 42, 10142 Torino, Italy
§ Istituto di Elettrotecnica Generale - CENS, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy

ABSTRACT

Efficient syllabic hypothesization in continuous speech has so far been an unsolved problem. A novel solution based on the extraction of acoustic cues is proposed in this paper. This extraction is performed by parallel processes implementing an expert system represented by a grammar of frames.

1. INTRODUCTION

Central to the organization of a Speech Understanding System (SUS) are the representation of knowledge structured on several levels of abstraction, and the control strategy that has to use the knowledge efficiently. This paper introduces a general framework for interpreting speech patterns and describes a set of rules which have been successfully applied in a task-independent multi-speaker system for speech decoding.

In principle, the system should be capable of accepting any sentence of any speaker in any language. For every analyzed sentence it produces a lattice of structured phonetic hypotheses. These hypotheses are obtained using relations between phonetic features and acoustic cues.

In the present implementation the system contains a set of rules which have been tested extensively, giving good results for the Italian language. Surprisingly good results were also obtained in limited tests on the English and Japanese languages. An interesting aspect of the system is that the present set of rules can be considered a kernel which can be enriched as new knowledge is acquired. Knowledge updating is presently performed by the designers, but it is hoped that some automatic learning will be introduced in the future. Particular care has been taken in selecting rules which use robust, easily detectable and possibly speaker-invariant acoustic cues. A frame language is proposed which describes a planning system for controlling rule application.

2. RELATION BETWEEN PHONETIC FEATURES AND ACOUSTIC CUES

A phoneme is represented by a set of phonetic features. For example, the phoneme /g/ is represented by the following set:

/g/ = "nonsonorant-interrupted-consonant (NI), lax (L), palatal (P)"

The phonetic features NI and L are related to acoustic cues by context-independent rules, while P is involved in a relation in which the context is also taken into account. For the sake of brevity, the rules will be introduced with an example. The phonetic feature "palatal" is involved in a relation with the acoustic cues "compact-burst-spectrum", "pseudo-loci" and "slopes of the second formant transition". The algebraic relation between the phonetic feature and the acoustic cues depends on whether it is associated with the feature "tense" or "lax". In both cases the relation has the following general form:

"palatal" = p1."pal-pseudo-loci"."pal-slopes" + p2."compact-burst" + p3."compact-burst"."pal-pseudo-loci"."pal-slopes"

Here p1, p2, p3 are measures of the importance of the logical conjunction (indicated by a dot) following them in the relation, and + indicates logical disjunction. The acoustic cues "pal-pseudo-loci" and "pal-slopes" are defined by other relations involving judgements expressed on parameters contained in the detailed descriptions of the acoustic cues.

Let F2B and F3B be the pseudo-loci of the second and third formant before the plosive, and let F2A, F3A be the pseudo-loci just after the consonant burst. "Pal-pseudo-loci" is defined as follows, in conjunction with the feature "lax" in a single intervocalic nonsonorant consonant:

"pal-pseudo-loci" = p4."high-pseudo-loci before" + p5."high-pseudo-loci after" + p6."high-pseudo-loci before"."high-pseudo-loci after"

where "high-pseudo-loci before" is a fuzzy set defined in the plane of the coordinates F2B, F3B, and "high-pseudo-loci after" is another fuzzy set defined in the plane of the coordinates F2A, F3A. Analogously, "pal-slopes" is defined as follows:

"pal-slopes" = p7."rising SB" + p8."falling SA" + p9."rising SB"."falling SA"

SB is the slope of the second formant transition before the consonant, and SA is the slope of the second formant transition after the burst.
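One plausible reading of these relations is sketched below in Common Lisp. The paper does not spell out the algebra at this point, so the choice of min for the dot, max for +, and multiplicative importance weights is an assumption on our part.

    ;; Sketch of evaluating the "palatal" relation with fuzzy operators.
    ;; min/max connectives and multiplicative importance weights are
    ;; assumptions; all inputs are membership values in [0,1].
    (defun fuzzy-and (&rest values) (reduce #'min values))
    (defun fuzzy-or (&rest values) (reduce #'max values))

    (defun palatal (p1 p2 p3 pal-pseudo-loci pal-slopes compact-burst)
      (fuzzy-or (* p1 (fuzzy-and pal-pseudo-loci pal-slopes))
                (* p2 compact-burst)
                (* p3 (fuzzy-and compact-burst pal-pseudo-loci pal-slopes))))

    ;; (palatal 0.8 0.9 1.0 0.7 0.6 0.5) => 0.5, the evidence for
    ;; "palatal" given these cue memberships and importance weights.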
Let F2B and F3B be the pseudo-loci of the second and third formant before the plosive and let F2A, F3A be the pseudo-loci just after the consonant burst, "pal-pseudo-loci" is defined as follows, in conjunction with the feature "lax" in a single intervocalic nonsonorant consonant: "pal-pseudo-loci" = p4."high-pseudo-loci before" + p5."high-pseudo-loci after" + p6."high-pseudo-loci before" . "high-pseudo-loci after" ; where "high-pseudo-loci before" is a fuzzy set defined in the plane of the coordinates F2B, F3B and "high-pseudo-loci after" is another fuzzy set defined in the plane of the coordinates F2A, F3A. Analogously, "pal-slopes" is defined as follows: "pal-slopes" = p7."rising SB" + p8."falling SA" + P9 . "rising SB" . "falling SA" ; SB is the slope of the second formant transition 107 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. before the consonant and SA is the slope of the second formant transition after the burst. Context dependencies are limited to pseudo- syllabic segments. The detection of bounds of pseudo-syllabic segments is a side effect of a Semantic Syntax Directed Translation (see Tai and Fu[l] ) which generates primary phonetic hypothe- ses from acoustic cues. These hypotheses can be ambiguous. Primary phonetic hypotheses are: VF : front vowel VB : back vowel VC : central vowel SON : sonorant consonant SNCL : cluster of sonorant consonants NI : nonsonorant interrupted consonant NA : nonsonorant affricate consonant NC : nonsonorant continuant consonant . Primary phonetic hypotheses are used as a preliminary constraint for lexical access with some similarity with a recent work by Shipman and Zue [2]. For the sake of brevity, the problem of gene- rating lexical hypotheses won't be discussed in this paper. Nevertheless it is worth mentioning that lexical hypotheses may constrain the appli- cation of context-dependent relations at the syl- labic level involving places of articulation. The extraction of acoustic cues used by these rela- tions can be based on top-down predictions. 3. PARALLEL ALGORITHMS FOR GENERATING SYLLABIC HYPOTHESES The extraction of acoustic cues and the ap- plication of relations is performed under the control of an expert system. The procedural know- ledge as well as the structural one are integra- ted in a collection of frames described by a fra- me language. The frame language and the the structure of the expert system allow a certain degree of paral- lelism to be achieved in speech decoding.The sys- tem has been conceived in a framework of distribu- ted problem solving and has been simulated on a DEC VAX 11/780 computer. Parallel algorithms are conceived by decom- posing the task of hypothesis generation into a number of subtasks. Subtasks are accomplished by reasoning programs called experts. The main moti- vation for using a distributed model is that parallel execution of tasks can make the system work close to real-time facing ambiguous data and a very large variety of possible solutions. Each expert uses some knowledge written in a Long Term Memory (LTM) and may write hypotheses or results of intermediate computations into a Short Term Memory (STM). The extraction of reliable descriptions of the speech signal is a difficult task and is accomplished by a society of auditory experts. Fig. d - Expektd 0,j the auditoty docie-ty Fig. 1 shows the auditory experts. Actions of writing into and reading from Short- Term Memories are represented by dashed arrows. 
Message passing between experts is represented by arrows. When a message contains pointers to a STM, a link is established between the arrow representing a message passing and the arrow re- presenting the action of writing into the STM. The speech signal is sampled, quantized, sto- red into a "SIGNAL-STM" and transformed by an expert called "Auditory Expert for End-Points De- tection and Signal Transformation" (AEPDST). AEDPST looks for the starting point of a sentence by using a set of rules for end-points detection When this point has been detected, AEDPST starts transforming the signal in order to obtain a fre- quency-domain representation of it which is sto- red into the "SPECTRA-STM". Some gross spectral features (GSF) are computed from the spectra and stored into the "GSF-STM". The LTM of AEDPST, denoted LTMl contains rules for end-point detection and spectral trans- formation.After a long enough part of the signal has been transformed, a synchronization signal is sent to the Expert for the description of the time evolution of the total signal energy (TE- DESCRIPTOR). TE-DESCRIPTOR has the task of descri- bing the time evolution of the total energy of the signal (TE) in terms of peaks and valleys. At the same time, AEDPST goes on, transforms another portion of the signal and sends a message to TE-DESCRIPTOR. This operation is repeated until a sentence end-point is detected. The LTM of TE-DESCRIPTOR, denoted LTM2, con- tains a grammar GTEDES that controls a coding of TE in terms of peak and valleys, This grammar and its use were described in [3]. Descriptions of the signal energy (TE) are Sent to another expert called "GSF-DESCRIPTOR1l, which provides the acoustic cues for segmenting the sentence. These acoustic cues are sent to the "SYLLABIC-CUE-EXTRACTOR" which determines pseudo- -syllabic bounds and extracts, sometimes upon re- 108 Table I FRAME := (t NAME) (SLOT-LIST)) SLOT-LIST := ((SLOT) ((SLOT)) ) SLOT := ((NAME) [( DESCRIPTION )I) DESCRIPTION := ( described-as (NAME )) := (( CONNECTIVE) (DESCRIPTIO#') := (not{ DESCRIPTION) ) := (filled-by ( NAME)) := (CONDITIONAL) := (result-of ( PROC)) CONDITIONAL := (when{ NAME) (DESCRIPTION) (DESCRIPTION) [ (else( DESCRIPTION) ) 1) := (when< PREDICATE EXPRESSION) DESCRIPTION (else DESCRIPTION ) ) := (unless DESCRIPTION DESCRIPTION ) CONNECTIVE := or := and := xor PREDICATE EXPRESSION := PREDICATE := ( not PREDICATE ) := ( CONNECTIVE PREDICATE k)' ) PROC := F- function := P- procedure quest from a syllabic expert, detailed acoustic cues to be used for pseudo-syllable hypothesiza- tion. Syllabic hypothesization is performed by a SYLLABIC EXPERT (SE) which receives lexical expec- tations and an unambiguous description of acou- stic cues and sends syllabic hypotheses to the lexical level. These hypotheses are affected by degrees of plausibility. The organization of knowledge stored into the LTM of the GSF-DESCRIPTOR is introduced in the next Section. 4. INTEGRATION OF STRUCTURAL AND PROCEDURAL KNOWLEDGE IN THE LONG TERM MEMORIES OF THE AUDITORY EXPERTS. The LTM of GSF-DESCRIPTOR, denoted LTM3, con- tains an integration of the structural and proce- dural knowledge for obtaining a description of the gross spectral features represented by the time evolution of the following parameters: TE : the total energy of the signal, El2 : the energy in the 3 - 5 KHz frequency band, R12 : the ratio of the energies in the frequency bands Bl = 0.2 - 0.9 KHz, B2 = 5 - 10 KHz. 
The knowledge in LTM3 is a hierarchical net- work of plans represented by a grammar of frames. The network represents a control strategy accord- ing to which knowledge is applied for extracting acoustic cues from spectral information. A frame is an information structure made of a frame-name and a number of slots. A slot is the holder of information concerning a particular i tern called "slot-filler" (Minsky r4]). Slot- -fillers may be descriptions of events, re- lations or results of procedures. Attempts to fill the slots are made during a frame instantia- tion. A frame instantiation can be started by a simple reasoning program of an expert after hav- Table II The LTM of GSF-DESCRIPTOR (OSFDFR (INPUT result-of P-READ(PARAMETERS)) (FRSTR (or (when PK-PCINPUT) (filled-by PKTE)) (when DP-P(INPUT) (filled-by DPTE)))) (TERM(result-of P-APPEND(QUOUT)))) (PKTE ; peak of total eneru (INTPTE (result-of F-INTCINPUT))) (PEAKE12 (result-of F-DESCRPEAK(Fa,Fb,INTpTE (HR (result-of F-FHGRlZ(INTPTE))) (VINT (result-of F-CVINT(PEAKE12,HR))) (PCONT (unless (filled-by (or (V~CPEAK)(SONPEAK)(NSPFAK)(BRSTPEAK))) (described-as UPK(INTPTE))))) 1) ) (VOCPEAK ; Vocalic peak (WCONT (when (and (HDURPKTE-P)(HPR12P)) (fille'd-by (o~(VOCCUESET) ((LEFT~~W))(C~NSVOW)))))) (V~CCUESET (LOWR (result-of F-FLOWR(INTPTE))) (TRNINT (result-of F-TRNFCINTPTE))) (VWINT (result-of F-INTCVCINT)) (HGR (result-of F-CONSHR(INTPTE,VWINT))) (VCONT (filled-by (O~(VOW) (C~NSV~W)(V~WC~NS))))) ing received a message. After a frame is instan- tiated , a copy of its LTM structure is created into the STM. At the beginning all the slots in the STM are empty and the expert which created the instantiation attempts to fill the slots se- quentially. Frame structures are precisely defi- ned by the rules of a grammar defining all the acceptable composition of the attibute relations. Table 1 shows the rules of this frame-structure grammar. The exponent K>l of an expression means that the expression can be rewritten any number of times greater than 1. The asterisk means that the expression can be absent, present, or repea- ted any number of times. Brackets in Table 1 con- tain optional items which can be repeated any number of times. The frame-structure grammar defines a langua- ge for representing LTM knowledge. Table II contains a part of the description of the frames stored in the LTM of GSF-DESCRIPTOR. Predicates are indicated in capital letters by words ending with -P and are defined by semantic attachments which will be described informally. Functions are indicated by names starting with F-. Procedures are indicated by names starting with P-.Whenever the frame GSFDFR is instantiated a process for filling the frame slots is created along with a node in the output queue QUOUT. Whenever a description of a total energy peak is received by the GSF-DESCRIPTOR, an instan- tiation of the frame PKTE is created into the STM of GSF-DESCRIPTOR by the attempt of filling the slot FRSTR of GSFDFR. The execution of the cor- responding plan is then initiated. This process attempts to fill sequentially the slots of PKTE. Receiving dip descriptions causes the instantia- tion of a frame DPTE. PKTE and DPTE are complex 109 structures. Attempting to fill their slots causes the extraction of acoustic cues. INTPTE is filled by the result of the appli- cation of the function F-INT on the argument INPUT. This function gives the time of beginning, the time of ending and the duration ot the peak described in INPUT. 
INTPTE is written into the STM after PEAKTE in the instantiation of PKTE. The next slot of PKTE is filled by the result of the function F-DESCRPEAK which describes the peak of energy in the frequency band Fa-Fb and in the time interval written in INTPTE.Successively, the function F-FHGR12 fills the slot HR.It gives the description of the time intervals inside INTPTE in which the ratio R12 is high (greater than a threshold THl).The function F-CVINT compu- tes the time intervals inside the peaks in PEAKE12 in which R12 is high.The last plan of the sequence attempts to fill the slot PCONT. This slot can be filled by a disjunction of frame instantiations called VOCPEAK, SONPEAK, BRSTPEAK. Each invoked frame corresponds to a hierarchy of more detailed plans which are executed for at- tempting to fill the frame slots.If no frame instantiations can be completed, a default condi- tion is assumed consisting in filling PCONT with the description UPK(INTPTE). UPK is the descrip- tion of an uncertain peak detected in the time interval INTPTE. A similar network of plans is used for at- tempting to fill the slots of DPTE. The execution of more detailed plans for filling the slots of VOCPEAK is conditionned by the verification of the truth of the two predica- tes HDURPKTE-P and HPR12-P. HDURPKTE-P is true if the duration of the signal energy peak is high, HPR12-P is true if there is at least one peak of R12 in INTPTE whose maximum value is higher than a threshold TH2.VOCCUESET has slots which are filled by the extraction of acoustic cues which usually appear in a total energy peak containing at least one vowel. F-TRNF(INTPTE) extracts an interval at the beginning of a peak where cues of a consonantal transient, typical for example of plosive sounds, has been found. F-INT(VCINT) looks for the description of vocalic cues in VINT. The time interval in which these cues have been found fills the slot VWINT. F-FLOWR extracts the intervals in which R12 is low, F-CONSHR extra- cts the consonantal interval in which R12 is high. The default value for both the functions is zero. The predicate HGVINT-P is true when the maximum energy in the band Fa-Fb is high in the time interval VINT. AEQ-P(VWINT,INTPTE) is true when the two time intervals VWINT and INTPTE are almost coincident. If the above predicates are both true, the peak is described as a vocalic one and the description is VOC(VWINT). Comments after the colons in Table II help 5. CONCLUSIONS. A new model for representing and using the syllabic knowledge of a Speech Understanding Sys- tem in terms of acoustic cues has been introduced. Experiments on several hundreds of syllable uttered in continuous speech by four male and one female speakers gave the right interpretation with the highest evidence value in more than 90% of the cases. The results refer to syllables ex- tracted from spoken sentences of the every day language. More than hundred syllables were analy- zed for each talker. The system has been simulated with a general purpose program for parallel problem solving. Pre- liminary results show that with an average degree of parallelism of 12, syllabic hypothesization, excluding signal processing, can be done in real-time using standard multi-microprocessors architectures. 6. 1 REFERENCES. - Tai,J.W. and Fu,K.S.,"Semantic Syntax-Direc- ted Translation for Pictorial Pattern Recognition" ,Purdue University Report TR-EE 81-38, 1981. - Shipman,D.W. and Zue,V.W.,"Properties of Large Lexicons: Implication for Advanced Isolated-Word Recognition Systems",Proc. 
ICASSP-82, Paris, 1982, pp. 546-549.
3 - De Mori, R., Computer Models of Speech Using Fuzzy Algorithms, New York: Plenum Press, 1982.
4 - Minsky, M., "A Framework for Representing Knowledge", in The Psychology of Computer Vision, Winston, P., Ed., McGraw-Hill, 1975.
5 - De Mori, R., Giordana, A., Laface, P. and Saitta, L., "Parallel Algorithms for Syllable Recognition in Continuous Speech", to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence.
PLANNING NATURAL-LANGUAGE UTTERANCES

Douglas E. Appelt
SRI International
Menlo Park, California

ABSTRACT

This paper describes recent research on a natural-language-generation system that is based on planning. A system named KAMP is described that is capable of producing English sentences as part of a plan to enlist the cooperation of another agent in achieving a goal involving a change in the physical state of the world. The planner uses knowledge about the different subgoals to be achieved and linguistic rules about English to produce sentences that satisfy multiple goals through the realization of multiple illocutionary acts.

I. INTRODUCTION

This paper describes a view of natural-language production similar to that of Allen and Perrault [1] and Cohen and Perrault [6], namely that speakers produce utterances with the intention of satisfying particular goals, and that a hearer's understanding of an utterance depends on how he interprets the utterance as a component of what he believes is the speaker's plan.

A system named KAMP (for Knowledge And Modalities Planner) has been developed that plans natural-language utterances, starting with a high-level description of the speaker's goals. The system can be viewed as an extension of speech-act planning research by Cohen [6], but while Cohen was only concerned with producing abstract descriptions of speech acts, KAMP allows the planning to be extended to the level of the production of English sentences, integrating both physical and linguistic actions that satisfy discourse, knowledge-state and referring goals into utterances that achieve multiple goals simultaneously.

This research has addressed the following three major problems:

- Developing a domain-independent multiple-agent planning system called KAMP.
- Extending the possible-worlds semantics representation of propositional attitudes developed by Moore [9] to handle mutual knowledge and wanting.
- Capturing linguistic knowledge in the axioms, critics, and procedures used by KAMP to facilitate the planning of linguistic actions.

* This research was supported in part by the Office of Naval Research under contract N00014-80-C-0296 and in part by the National Science Foundation under grant MCS-8115105.

[Figure 1. Satisfying Multiple Goals with a Request.]

II. WHY PLAN UTTERANCES?

Figure 1 illustrates a typical situation arising when two people cooperate on a common task. The speaker plans an utterance that has multiple effects on the hearer. The speaker points to one of the tools on the table, saying "Use the wheelpuller to remove the flywheel." The hearer, who is observing the speaker while he makes the request and knows that the speaker is pointing to the particular tool, thinks to himself, "Ah, so that's a wheelpuller. I was wondering how I was going to get the flywheel off."

The speaker's utterance in Figure 1 is syntactically very simple, but a surprising amount of complex reasoning is required for a speaker to produce such an utterance and know that it will have the intended effect on the hearer. The speaker obviously wants to request that the hearer do something. However, before he can make the request, he has to determine whether the hearer has enough knowledge to carry it out. If not, then the speaker has to know that the hearer can form a plan for acquiring the knowledge, or he must furnish the knowledge himself. In this example, the speaker informs the hearer that he should use the wheelpuller as part of the same utterance that he uses to
request the action he wants performed.

The speaker uses the noun phrase "the wheelpuller" to refer to a particular tool. In Figure 1, it is evident from the hearer's reaction that he did not know what a wheelpuller was. It is also clear that the speaker knew what the hearer did not know, because he performed a pointing action to clarify his intention to refer. Although the speaker knew that the hearer did not know what a wheelpuller was, he knew that the hearer would know after understanding the sentence. The utterance of Figure 1 also serves to inform the hearer that the object to which the speaker is pointing is a wheelpuller. For the speaker to make that inference, he had to know that the hearer would know that he did not intend the object to which he was pointing to be the referent of the noun phrase "the flywheel." He knows that because he knows the hearer understands that the flywheel is not a tool and, therefore, cannot fill the instrument role of "remove." Under different circumstances, the speaker could point to the flywheel, utter a sentence identical to the one in Figure 1, and reason that the hearer would learn that the object to which he was pointing was the flywheel.

[Figure 2. The Organization of a Language Planning System.]

III. THE KAMP LANGUAGE PLANNING SYSTEM

It is clear from the above example that a model of language production that simply transforms a logical form to a surface utterance does not account for the way that people use utterances to satisfy multiple goals, because the model does not include the reasoning that the speaker and hearer perform to infer each other's plans. Furthermore, utterances do more than alter the participants' knowledge and wants. They influence the participants' emotional attitudes and affect the state of the ongoing discourse. Utterances can be planned with the intention of achieving goals along these dimensions as well. Therefore, instead of a simple transducer from logical form to surface English sentences, KAMP is organized like the planner in Figure 2.

Appelt [2][3] discussed the organization of KAMP as a hierarchical planner similar in overall organization to Sacerdoti's NOAH. KAMP has two descriptions of actions at each level in the action hierarchy: (1) a full axiomatization in terms of possible worlds, and (2) a shorter, more intuitive description called an action summary. KAMP uses the action summaries as a heuristic device to propose plans that it then verifies using the possible-worlds axiomatization. The heuristic plan generation process is implemented by the NOAH-like hierarchical planner, and the verification process is implemented by a first-order-logic theorem prover.

[Figure 3. A Hierarchy of Actions Related to Language: illocutionary acts; surface speech acts; concept activation (propositional acts); utterance acts.]

Figure 3 illustrates the hierarchy of actions that is used by KAMP to plan linguistic actions. The central problem of building a language-planning system around KAMP is formulating the correct axioms, and incorporating the correct action summaries and critic procedures into KAMP that describe the actions of the hierarchy in Figure 3.

IV. AXIOMATIZING KNOWLEDGE ABOUT INTENSIONAL CONCEPTS

Axiomatizing the actions of Figure 3 requires the ability to specify the way in which performing actions affects the knowledge and wants of agents. Moore's possible-worlds-semantics approach [9] solves this problem for knowledge and its relation to action. Describing Moore's approach in detail is not possible here, but the central idea is to axiomatize the possible-worlds semantics of a modal object language in a first-order meta-language. Thus, the semantics of a statement like Know(A, P) is represented as "P is true in every possible world consistent with what A knows."

It is necessary to reason about mutual knowledge (i.e., knowledge that A knows that B knows that A knows, and so on ad infinitum) to plan referring expressions (see Clark and Marshall [5]). KAMP reasons about the mutual knowledge shared by two agents by reasoning about what is true in the union of the sets of possible worlds consistent with the two agents' knowledge. An "agent" called the kernel of A and B is defined, for whom the worlds consistent with his knowledge are precisely that union. This approach is a generalization of the "any fool" approach advocated by McCarthy et al. [7].
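The following minimal sketch (an assumed extensional encoding, not KAMP's axioms) illustrates this reading of Know and of the kernel: an agent knows P when P holds in every world consistent with what it knows, and the kernel's accessible worlds are the union of both agents' accessible worlds:

    # Worlds are labels; k[a] is the set of worlds consistent with what
    # agent a knows. All names here are illustrative assumptions.

    def knows(k, agent, p):
        # Know(agent, P): P is true in every possible world consistent
        # with what the agent knows.
        return all(p(w) for w in k[agent])

    def kernel_worlds(k, a, b):
        # The kernel of A and B: the worlds consistent with its knowledge
        # are precisely the union of the two agents' sets of worlds.
        return k[a] | k[b]

    k = {"A": {"w1", "w2"}, "B": {"w2", "w3"}}
    p = lambda w: w != "w4"                  # a proposition true in w1..w3
    print(knows(k, "A", p))                              # True
    print(all(p(w) for w in kernel_worlds(k, "A", "B")))  # mutually known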
Wanting is represented in KAMP by a relation between an agent and a set of possible worlds called a situation. The situation is a set of possible alternatives to the current world which an agent wants. The situation that an agent wants can be characterized by different propositions according to what he knows. An agent is said to want P if there is some situation he wants such that P is true in every possible world that is a member of the situation, with the terms of P evaluated with respect to the agent's knowledge. This representation allows a connection between knowledge and wanting which, while ignoring many of the subtle problems associated with wanting and intention, is adequate for solving many planning problems that arise in the task-oriented domains under consideration. This approach allows such reasoning as, for example: if John wants to meet the president of the United States, and if John knows that the president of the United States is Ronald Reagan, then John wants to meet Ronald Reagan.

V. AXIOMATIZING LINGUISTIC ACTIONS

As illustrated in Figure 3, the most abstract linguistic actions in KAMP's hierarchy are illocutionary acts. These are actions such as informing, requesting, promising, thanking, etc. that can be characterized as communicative acts independent of any particular linguistic realization. Speakers do not perform illocutionary acts directly, but rather perform them by means of surface speech acts. When a speaker plans a surface speech act, he selects the propositional content of the sentence he is going to utter (which may be different from the propositional content of the illocutionary act in the case of indirect speech acts), and selects a particular syntactic structure that is used for realizing the illocutionary act. A fundamental choice made at this level is whether to use an imperative, interrogative or declarative sentence. Each surface speech act has a syntactic structure tree associated with it that evolves as the plan is expanded to include more constituents that are expanded to progressively lower levels.

The relationship between illocutionary acts and surface speech acts is similar to the relationship between walking across the room and a sequence of muscle movements. One action is performed by performing the others.
What distinguishes this relationship from that of walking and muscle movements is that the particular illocutionary act that is being performed depends on the hearer's recognition of what the speaker is trying to do. A particular surface speech act, for example, "Can you reach the tool on the top shelf?" can in one case be a request to retrieve a tool, and in another case a request to inform the speaker of the ability to retrieve a tool. As the example illustrates, it is not even the case that there is a one-to-one correspondence between illocutionary acts and surface speech acts. In this case several informing and requesting actions are being performed as part of a single surface speech act.

KAMP has an axiomatization of each illocutionary act and surface speech act it knows about in terms of the possible-worlds approach outlined above. This paper will not describe the axioms in detail; the interested reader is referred to Appelt [2] for more information. The general approach to axiomatizing illocutionary acts is to describe only what Austin [4] refers to as illocutionary effects, not perlocutionary effects. In other words, the effect of informing a hearer that P is not that the hearer then believes P, but that the hearer knows that the speaker wants him to know that the speaker believes P. However, the speaker wanting the hearer to know that the speaker believes P is a reasonable precondition for the sincere performance of an informing action. Therefore, the effects of an illocutionary act can be said to produce the mutual knowledge between the speaker and the hearer that the act has been performed. All deductions about the change in the knowledge of the participants follow from knowing that the action has been performed and from their mutual knowledge of the conditions on the action's performance.

Surface speech acts include concept activation actions as part of their realization on the next lower level of abstraction. Concept activation actions perform referring at a high enough level of abstraction so that they are not constrained to be purely linguistic actions. When a concept activation action is expanded to a lower level of abstraction, it can result in planning a noun phrase within the surface speech act of which the concept activation is a part, and physical actions such as pointing that also communicate the speaker's intention to refer, and may be realized by a plan that includes either physical or linguistic actions. Although concept activation actions can be realized through physical actions, the planner must reason about their interaction with the linguistic actions being planned. Therefore, concept activation actions are expanded into two components: an intention-communication component communicating the speaker's intention to refer, and a linguistic-realization component realizing the action in a sentence in accordance with the grammatical rules.

The lowest level actions of Figure 3 are the utterance acts. Utterance acts consist of the utterance of particular sequences of words. The component of KAMP that produces utterance acts from a plan of hierarchical linguistic actions and the constituent-structure trees associated with surface speech acts is quite simple because, at this point, no modifications to the plan are made: decisions are completely determined by grammatical rules, and the speaker's intentions do not influence the process. The final stage of planning consists primarily of making obligatory modifications required by the grammar, such as subject-verb agreement, proper auxiliary affixes, insertion of reflexive pronouns, and the like.
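As a minimal sketch of this hierarchical expansion (the action names and the expansion table are illustrative assumptions, not KAMP's actual repertoire), each level proposes realizations heuristically while a verify step stands in for the theorem-prover check against the possible-worlds axioms:

    # Hypothetical expansion table from abstract to concrete linguistic acts.
    EXPANSIONS = {
        "request": ["surface-speech-act(imperative)"],
        "surface-speech-act(imperative)": ["concept-activation",
                                           "utterance-act"],
        "concept-activation": ["noun-phrase"],  # could also expand to pointing
    }

    def expand(action, verify):
        # Expand an action top-down; every proposed step must pass
        # verification (in KAMP, a first-order-logic theorem prover
        # plays this role).
        plan, frontier = [action], list(EXPANSIONS.get(action, []))
        while frontier:
            step = frontier.pop(0)
            if not verify(step):
                raise ValueError(f"proposed step {step!r} was not verified")
            plan.append(step)
            frontier.extend(EXPANSIONS.get(step, []))
        return plan

    print(expand("request", verify=lambda step: True))
    # ['request', 'surface-speech-act(imperative)', 'concept-activation',
    #  'utterance-act', 'noun-phrase']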
VI. CONCLUSION

The development of KAMP has been the first step toward a theory of planning natural-language utterances that allows the satisfaction of multiple goals in a single surface utterance, that plans utterances tailored to the specific knowledge of an intended hearer as well as the context of the discourse, and that provides for the integration of physical and linguistic actions.

There are a number of areas in which the concepts developed in KAMP can be profitably applied and extended. One major area is the planning of extended discourse. Currently, KAMP plans only very simple dialogues. It may plan more than one utterance if it wants to perform several illocutionary acts and it cannot figure out a way in which one can subsume the others. The resulting dialogues will be coherent because the illocutionary acts are naturally tied together by being part of the same plan. However, to move beyond simple dialogues consisting of alternating one or two sentence turns, more complex, abstract discourse-level actions must be defined. McKeown [8] incorporates such strategies in a language generation system, and such actions need to be formalized in a planning framework for use by a system like KAMP. KAMP currently keeps track of discourse focus primarily so it can generate appropriate referring expressions. When planning an extended discourse, the planner would also be concerned about the speaker's need to inform the hearer of topic shifts. Topic shifting actions, similar to those described by Reichman [10], must be formalized so they can be planned when appropriate.

The primary focus of research on KAMP has been on planning natural-language utterances. However, KAMP is a general tool that can serve as the basis of multiple-agent planning systems in a variety of domains. There are many problems, concerning planning to acquire knowledge and cooperation among several agents with limited resources, for which KAMP seems useful.

VII. ACKNOWLEDGEMENTS

The author is grateful to Barbara Grosz and Jane Robinson for comments on an earlier draft of this paper.

REFERENCES

[1] Allen, James and C. R. Perrault, "Analyzing Intention in Utterances," Artificial Intelligence, vol. 15, no. 3, December 1980.
[2] Appelt, Douglas E., "A Planner for Reasoning about Knowledge and Action," Proceedings of the First Annual National Conference on Artificial Intelligence, 1980.
[3] Appelt, Douglas E., "Planning Natural Language Utterances to Satisfy Multiple Goals," SRI International Artificial Intelligence Center Technical Note No. 259, 1982.
[4] Austin, J., How to Do Things with Words, J. O. Urmson (ed.), Oxford University Press, 1962.
[5] Clark, Herbert, and C. Marshall, "Definite Reference and Mutual Knowledge," in Joshi et al. (eds.), Elements of Discourse Understanding, Cambridge University Press, Cambridge, 1981.
[6] Cohen, Philip and C. R. Perrault, "Elements of a Plan Based Theory of Speech Acts," Cognitive Science, vol. 3, pp. 177-212, 1979.
[7] McCarthy, John, M. Sato, T. Hayashi, and S. Igarashi, "On the Model Theory of Knowledge," Stanford Artificial Intelligence Laboratory Memo AIM-312, 1978.
[8] McKeown, Kathleen R., "Generating Relevant Explanations: Natural Language Responses to Questions about Database Structure," Proceedings of the First Annual National Conference on Artificial Intelligence, 1980.
[9] Moore, Robert C., "Reasoning about Knowledge and Action," SRI International Technical Note No. 191, 1980.
[10] Reichman, Rachel, "Conversational Coherency," Cognitive Science, vol. 2, no. 4, 1978.
[11] Sacerdoti, Earl, A Structure for Plans and Behavior, Elsevier North-Holland, Inc., Amsterdam, 1977.
FRAME-BASED COMPUTER NETWORK MONITORING

Lawrence A. Stabile
Prime Computer, Inc.*
500 Old Connecticut Path
Framingham, Massachusetts 01701

Abstract

This paper describes an application of FRL to the monitoring of a large computer network. The design utilizes frame models for the network domain and a general graphics system, and a method for coupling the two domains by analogy creation and maintenance. We also explore the use of heuristics to control the selection of viewable descriptions, at their various abstraction levels and sizes. An ultimate goal of the project is to provide a convenient means for developers to create new graphic descriptions of the domain, without worry as to the size of the description, and allow the heuristics to decide how and when the descriptions should be placed on the screen.

Introduction

We present the design of a frame-based system for monitoring and displaying the properties of a large computer network. We focus on (1) the development of a frame model for the computer network domain, including relevant information regarding performance, operational status, and topology, (2) the design of a frame model for graphic displays, a general constraint-based structure of hierarchical regions that may be manipulated by the user or the network domain; it includes the notion of "intelligent pan and zoom", described further below, and (3) the use of analogy creation and maintenance to join these two domains to effect the desired interactive behavior and promote the modularity of state-based structures. This project uses a version of FRL [7,8], briefly described in a later section.

This work bears similarity to the I-Space project of Rieger et al. [6], and to various constraint-based simulation-oriented systems, such as Thinglab [1] and Constraints [9]. A distinction between our project and I-Space is our explicit attack on the problem of the size of graphic description: a terminal screen is of limited resolution; it represents an ultimate constraint in graphic-system design. The author sees this problem as central to the control of large data sets; our solution is to use domain-independent and -dependent heuristics on a homomorphic correspondence between the application domain (here, computer networks) and the graphics domain. This manifests itself as a so-called generic analogy, which we describe below.

* Present address: Apollo Computer, Inc., 19 Alpha Road, Chelmsford, Mass. 01824

Goals

We desire to construct a system that permits the centralized observation of all activities on a large network of computers (in our case, a token-based ring network of approximately 50 Prime 750 systems). Typically, the user of such a system will be an operator or system administrator who needs to view an entire field of machines at once. The desired characteristics of the network monitor are:

(1) The continuous monitoring and abstraction of important system events, such as faults, illegal attempts to access, and configuration changes.

(2) The observation of system performance via abstractions such as Cartesian graphs, tables, and flow diagrams, to display individual and overall machine loads, available disk space, topology, and other properties.

(3) The presentation of the above items to the user in an intelligent fashion on a (preferably color) graphics display.

Design of the Network Monitor

With the above goals in mind, it was natural to choose a system like FRL for its implementation.
Its active nature (via attached procedures) provides us with the ability to make inferences, enforce constraints, and propagate information. This gives us the ability to use the representations as "physical things" that change appropriately when kicked or poked, just as real "things" should. In the following paragraphs, we will first describe the overall structure of our system, then the network domain model, then the graphics model, and last the analogical link constructed between the two.

Overall operation of the network monitor is as follows. We start with a large network of machines, in this case a token-based ring. Running on each machine is a process that periodically sends a package of raw data back to the master monitoring process. Here, the raw data is placed under the appropriate slots in the frame or frames that model that machine, or associated communication links (virtual circuits). Placement of this data triggers demons that compute desired features and abstractions, which in turn may trigger further computations. When sufficient data has been computed, demons that recognize this fact pass the data to the corresponding graphics frame via the analogical link (should one exist). Should the user be observing an updated piece of data, this update will immediately appear on the screen. The user may choose, via keyboard and/or tablet, to observe some other data or to probe a given datum in more detail. The graphics system will then be "connected", via analogy, to that data, and observation proceeds from there.

The frame structure for the network domain starts with the object "ring-net", which contains as components sets of machines and communication links. These frames in turn have components, such as user sets and peripheral devices. In addition, the frames contain various features and demons to compute these features as data is added.

As an example of the operation of the network frames, consider some parameters of an individual machine. The raw data from the measurement processes contains cpu-time, io-time, and page-fault-rate. The addition of any of these to their slots in a machine frame causes computation of a general "load" figure; a sketch of this update path appears below. Similarly, other performance parameters, network topology, peripheral status, and other data trigger demons appropriate to the desired abstractions. The demon-inheritance mechanisms of FRL are very important here; for instance, general topology changes are handled high in the ako-structure for the net, whereas more specific computations reside at the low level. The demons which may be triggered by such data changes thus range from entire recomputation of the graphics analogy (e.g., as a result of a topology change) to simple changes of one text string to another.

The graphics frame system is a general-purpose graphics tool with which one can build several kinds of hierarchical structures. Regions are contained within regions down to a primitive level; these are the objects that directly drive the terminal at hand, and are typically lines or strings, although any program that directly drives the terminal may be interfaced to the system to act as a primitive object (e.g., a bar graph). The basic unit used to size and group the data is the rectangle; a "compute-size" demon calculates the enclosing rectangle of a set of rectangles, given their relative coordinates.
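The following is a minimal sketch (hypothetical slot names and a toy load formula, not Prime's code) of that demon-driven path: adding raw data to a slot fires an if-added demon, which computes the "load" abstraction and forwards it over the analogical link to a graphics frame:

    class Frame:
        def __init__(self, name):
            self.name, self.slots, self.demons, self.analogy = \
                name, {}, {}, None

        def put(self, slot, value):
            self.slots[slot] = value
            for demon in self.demons.get(slot, []):  # if-added demons fire
                demon(self)

    def compute_load(machine):
        s = machine.slots
        if all(k in s for k in ("cpu-time", "io-time", "page-fault-rate")):
            s["load"] = s["cpu-time"] + s["io-time"]     # toy load figure
            if machine.analogy is not None:              # analogical link
                machine.analogy.put("bar-height", s["load"])

    machine, bar = Frame("machine-7"), Frame("bar-graph-7")
    machine.analogy = bar
    machine.demons = {k: [compute_load]
                      for k in ("cpu-time", "io-time", "page-fault-rate")}
    machine.put("cpu-time", 60)
    machine.put("io-time", 30)
    machine.put("page-fault-rate", 12)
    print(bar.slots)   # {'bar-height': 90}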
Special demons for each kind of layout desired position the objects in the appropriate way, relative to the enclosing object. Size computations propagate from the bottom up, thus giving the layout designer freedom from screen-size constraints. One can build objects from other objects, construct new layout demons, or add to the set of primitives, without regard to size; choice of properly-sized objects is done heuristically, as described below. Current elements of the layout library include center-justified stacks of rectangles, top-justified strings (and variations on these), elliptical layouts, text strings of various fonts and sizes, bar graphs, and individually pan/zoom-able (in the physical sense) Cartesian graphs.

With the network domain model and the graphics model so described, our task now is to link them together. We do this by creating and maintaining an analogy between the two structures. In our system, this method forms a fundamental engineering structuring technique: state-based systems may be built independently, then coupled via the analogy mechanism. This is similar to using composition in a functional model, but here the state behavior of each system is naturally preserved and utilized.

An analogy is simply a correspondence between the components of two frame structures such that changes in the components of one domain are naturally reflected in the other. A slot name is chosen for the correspondence, such as "full-graphic-description" or "summary-graphic-description". The correspondence is currently produced by demons in the source domain that instantiate the target frames and connect the analogical link. Recursive domains are supported simply by requiring an analogy-creating demon to construct a target frame using components that are analogies of the components of the source frame.

Although many analogies are built as correspondences between individual frames, the truly interesting case occurs when we create a correspondence between an individual and a generic structure. We thus have a homomorphism in which any object that may be instantiated in the target prototype validly represents the source object. The user, and the system heuristics, may freely choose an object to instantiate depending on size constraints, desired level of abstraction, and other general or domain-specific information. In our monitoring system, the individual is the network model and the generic structure is the set of possible graphic descriptions of that model. For example, a ring-net may have a description which is graphically depicted as an elliptical layout, with small objects for each machine node that may indicate nothing more than "up", "down", "loaded", or "unused". One could thus fit this summary of many machines on a single screen. A more detailed description might include bar graphs of the performance of each machine, but one could then see only a small number of them at a time. The variety of descriptions continues in this way; what the user sees is determined by desire, size, detail, and importance.

The resulting generic-graphics structure forms an AND/OR graph; any tree built from this graph is a valid description of the domain. To obtain a size-sorted set of descriptions for pan-zoom, we note that since size propagates upwards, we could simply enumerate the cross-product of all choices, size-sort them, and allow the user to choose a specific structure. This is, unfortunately, potentially explosive combinatorially.
We thus use both graphics-domain and network-domain heuristics to determine the appropriate set of objects to build. Some of the heuristics are:

(1) Allow the user (in the network domain) to attach intuitive information as to the relative size and importance of the graphic descriptions.

(2) Use limited-depth search of the graphics AND/OR graph to approximate exhaustive enumeration.

(3) Use general domain properties; e.g., a table may be too large to fit on the screen and could be divided into "pages".

(4) Use the "uniform level hypothesis": that many domains are naturally structured as parts decompositions that are of the same kind at each level (e.g., in digital circuits, we go from block diagram -> gates -> transistors -> holes and electrons).

(5) Be able to correct any erroneous choices made above by suitable backtracking.

Implementation Status

Currently, a small version of the network monitor runs on five machines. The network-domain frames receive data over X.25 virtual circuits; performance data is currently the only data gathered. The graphics system is complete, contains many primitive objects and layouts, and operates in a completely constraint-based manner. It has been tested on various small domains (including recursive ones) with success. Heuristics for pan and zoom currently consist of user-supplied size intuitions and a simple limited-depth size search. It is anticipated that the ultimate usefulness of the heuristics described earlier will not be ascertainable without using larger data sets. Backtracking is installed, and is currently user-controlled.

FRL is implemented as a set of primitives inside our interpretive Lisp system. The performance of this system will have to be improved for truly usable results, although it is acceptable on the current small model. Our version of FRL is much like the original [7,8], but we have not implemented some of the more esoteric functions, and have added a few extensions: *if-added, *if-removed, etc., are properties attached to the frame definition of slots. These are useful for bookkeeping kinds of slots that may have a variable number of occurrences in a frame (e.g., component slots), for installing inverses of slots, and (eventually) for controlling analogies in a constraint-based way. Also, we have installed a simple queuing mechanism to suspend attached procedures (via closures) when their needed data is unavailable. Similar to the miters of Fikes [2], this is useful for non-constraint-based inferences, and for resolving "forward references" when loading data from a text file.

The network communications facility is embedded in Lisp as a few simple functions to start processes and to send and receive messages. The underlying mechanism for this is a set of inter-process communication primitives (see [3] for details). For an overview of Lisp-based network communication techniques, see Model [4].

Problems and Directions for Future Research

Localization of naming scopes within an FRL domain would be of great benefit, allowing independent designers to combine frame systems without name-conflict worry. We are considering extensions in this direction. In addition, a troublesome aspect of FRL is the lack of a clear distinction between knowledge and meta-knowledge. This crops up in spots where we wish to build, say, frames whose constraints control the way in which other frames and their constraints are built.
We also wish to apply our system to the control of networks, as well as their monitoring, and to utilize some of the ideas from expert systems and pattern-matching in the monitoring and control process, a goal shared by the I-Space project [6].

Despite the problems mentioned above, the author has found the object-based method to be of great use in structuring data and their interactions, with particular freedom from sequential control.

References

[1] Borning, A., "Thinglab - An Object-Oriented System for Building Simulations using Constraints", IJCAI-5, MIT, Cambridge, Mass., August 1977.
[2] Fikes, R.E., "Odyssey: A Knowledge-Based Assistant", Artificial Intelligence, 16-3, July 1981.
[3] Gertner, I., "A Report on Process Management in a Guest Distributed Operating System", Third Intl. Conf. on Distributed Computing Systems, October 1982, Ft. Lauderdale, Florida.
[4] Model, M.L., "Multiprocessing via Intercommunicating Lisp Systems", Proc. 1980 Lisp Conference, Stanford, California.
[5] Rieger, C., and C. Stanfill, "Real-time Causal Monitors for Complex Physical Sites", Proc. NCAI-1, Stanford, California, 1980.
[6] Rieger, C., R. Wood, and E. Allen, "Large Human-Machine Information Spaces", Proc. IJCAI-81, Vancouver, BC, 1981.
[7] Roberts, R.B., and I.P. Goldstein, "The FRL Primer", MIT AI Lab Memo 408, 1977.
[8] Roberts, R.B., and I.P. Goldstein, "The FRL Manual", MIT AI Lab Memo 409, 1977.
[9] Sussman, G.J., and G.L. Steele, Jr., "Constraints - A Language for Expressing Almost-Hierarchical Descriptions", Artificial Intelligence 14-1, August 1980.
[10] Winston, P.H., "Learning and Reasoning by Analogy", CACM, 23-12, December 1980.
Competence in Knowledge Representation

Ronald J. Brachman
Hector J. Levesque
Fairchild Laboratory for Artificial Intelligence Research
Palo Alto, CA

§1 Introduction

The range of domains and tasks for "knowledge-based systems" has been expanding at a furious pace. As we move away from trivial domains, such as the "blocks world", the demands on knowledge representation systems used by expert programs are becoming more extreme. For one thing, the domains themselves are getting so complex that specialized technical vocabularies are unavoidable; consequently, the issue of a system talking with an expert in his own language cannot be ignored. For another, tasks such as medical diagnosis, scene analysis, speech understanding, and game playing all have as a central feature an incrementally evolving model representing probably incomplete knowledge of part of the task domain. In this paper, we explore some of the impact of these two critical issues, complexity and incompleteness, on knowledge representation systems. We review some aspects of current representation research that offer a foundation for coping with these problems, and finally suggest a way of integrating these ideas into a powerful, practical knowledge representation paradigm.

§2 Two Kinds of Adequacy

A major concern when trying to represent an expert's knowledge about a sufficiently complex domain is the management of the technical vocabulary of that domain. In the blocks world, the technical vocabulary amounted to a handful of terms (like "block" or "color"), almost all of which were primitive. The ability of an expert in medicine or VLSI design, on the other hand, appears to depend in large part on a grasp of a non-trivial number of interrelated terms. In fact, a significant part of the job of becoming an expert is acquiring the technical jargon necessary to understand other experts. Consider, for example, some of the terms used by an expert in VLSI design.

- An "enhancement mode transistor" is a kind of transistor with specific electrical properties.
- A "pass transistor" is any transistor that plays a certain role in a larger circuit.
- A "barrel shifter" is a structured configuration of components with a certain functionality.
- "Two-phase nonoverlapping clocking" is a method of organizing the timing in a circuit.

In order to behave knowledgeably in a real domain, a system will have to interact with experts using specialized terms like the above. Therefore, the application of knowledge representation to expert problems demands of a representation system the ability to develop, augment, and maintain this kind of technical vocabulary. As the above examples suggest, a representation scheme must allow the introduction of terms that deal with different aspects of the domain: objects, properties, methods, rules, heuristics, and so on. Moreover, it must be possible to specify terms that are related to each other in several different and complex ways.

A second aspect of expert tasks like medical diagnosis is the incremental acquisition of world knowledge. In the blocks world, a system could be given complete knowledge of the domain in that, for example, it could assume that the only blocks on a table were those it had been told about. Knowledge about the world in a high-level recognition task, on the other hand, will be acquired gradually and, at any given time, may be quite incomplete. Consider the knowledge acquired by a system at some stage of its processing.
- A speech understanding system might know that a phoneme under consideration is a /b/ or a /v/, but not which.
- A scene analyzer might theorize that some object other than the pyramid is casting a shadow on a box without having identified that object.
- A medical diagnosis consultant may establish that the cause of the metabolic acidosis is not shock but not know what the cause is.

A knowledge representation system for tasks like these must have the ability to express what is known about the world (i.e., to form and maintain a theory), however incomplete this knowledge may be. One important consequence of this involves the form of the representation itself: in the case of VLSI design, for example, it may not be possible to simply construct an analogue of a chip where representational objects stand for fragments of the chip and relationships between these objects correspond to relationships between the pieces of the chip. Incomplete knowledge about a domain (in the form of a theory) need not look at all like the domain itself.

The requirements in dealing with realistic application domains amount to this: there are at least two distinct notions of adequacy that a knowledge representation system for expert tasks will have to address. Terminological adequacy involves the ability to form the appropriate kind of technical vocabulary and understand the dependencies among the terms; assertional adequacy involves the ability to form the kind of theory appropriate to the world knowledge of a system and understand the implications of the theory. In other words, terminological adequacy means getting the right kind of structured terms and assertional adequacy means putting the terms together to properly express what is known, however incomplete.¹

¹ The issue of defaults seems to involve both areas. The need for defaults is certainly based on the need for action given incomplete world knowledge. On the other hand, at least one interpretation of defaults is such that it is part of understanding what is involved with a term to know what the typical (and atypical) cases are.

§3 Aspects of Current Research

One of the places to look for assertional adequacy is in the standard first-order logical languages. These appear to be ideal for representing knowledge that is potentially very incomplete. For example, one can assert that there is something with a certain property without having to say what thing has that property (existential quantification), or that one of two sentences is true without saying which (disjunction). Indeed, one view of logic is that it is a formal account of what is implicit in statements where very little is explicit.

But however assertionally adequate a first-order language may be, it has the property that there are no constraints or interdependencies forced on the interpretation of the non-logical (predicate, function, or constant) symbols. In other words, the domain-dependent terms in a classical first-order language are all completely independent. One can certainly form a theory stating, for example, that every "enhancement mode transistor" is a "transistor", but the language itself cannot be made to enforce this dependency. Once it is agreed that the theories of a first-order logic are intended to represent world knowledge, there is simply no place left to introduce terminology.
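As a minimal sketch of this kind of assertional adequacy (the encoding is an illustrative assumption, not part of the paper's formalism), a theory can be viewed as the set of worlds it leaves open: it rules some possibilities out without determining a unique one:

    # Candidate worlds: which phoneme was actually uttered.
    worlds = ["b", "v", "d"]

    # The theory asserts only a disjunction: the phoneme is /b/ or /v/.
    theory = [lambda w: w in ("b", "v")]

    # The models of the theory are the worlds it does not rule out.
    models = [w for w in worlds if all(s(w) for s in theory)]
    print(models)  # ['b', 'v']: /d/ is ruled out; the theory stays incomplete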
With at least some "object-centered" languages (i.e., semantic net and frame approaches), the situation seems completely reversed. To the extent that they allow theories to be formed at all, these tend to be very limited. In particular, the assumption that knowledge is based on explicit representations for all objects of interest rules out sentences that do not have specific individuals as the objects of predication. The emphasis, rather, seems to be on facilities for structuring and organizing a set of terms (frames, concepts, units, classes). There might be various ways of specializing existing terms, of aggregating multiple terms into single structures, of delineating the roles played by the components of a structure, and of separating the essential from the merely prototypical in the use of a term. This is done quite independently of any theory about the world. For example, calling any transistor that plays a certain role in an inverter a "pull down transistor" is not merely asserting that, as a matter of fact, one set of electrical components is the same as another (as would be the case with a universally quantified biconditional); rather, it involves defining a term so that it is analytically true that any component playing that role is a pull down transistor.

§4 A Unifying Approach

Given the almost independent concerns of the two (perhaps caricatured) representation methodologies just mentioned, a natural organization for a knowledge representation system suggests itself: combine both representation paradigms into a single, unified framework. Moreover, the unification we are considering is not simply an unstructured amalgamation of two languages (nor is it simply maintaining an alternate, predicate calculus form of representation for a semantic net). The object-centered part will be responsible for organizing in a convenient way the relevant domain-dependent terms being used; the logical part, on the other hand, will use these terms to form theories of the application domain. Each of the components can have its own purpose, methodology, and standards of adequacy. Since neither is required to deal with issues for which it was not designed, its advantages can be enjoyed and its limitations minimized.

The elements of each of the two components follow straightforwardly from our separation of concerns. The assertional component consists of a theory:² a set of sentences in a logical language (e.g., that of first-order predicate logic). The terminological component is a set of terms composed from other terms using a small set of term-composition operators (e.g., those of KL-ONE [1], [5]). These terms then become the non-logical (predicate and function) symbols within the assertional component. The fact that the terms are definitionally interrelated means that the non-logical symbols of the assertional component stand in various analytical relationships to each other.

² We have chosen to deal with a single theory within the assertional component at this stage. This is a pragmatic choice and others would be possible within our methodology.

For example, we might have within the terminological component a KL-ONE Concept called "transistor" with a Role "connection" differentiated into a "source," a "drain," and a "gate." We can then define a term called "pull down transistor" as

    pull down transistor ← (VRGeneric transistor source GND)

meaning a transistor whose source connection is GND. VRGeneric is a term-forming operator that allows a Generic Concept to be defined by value restricting a Role (in this case, source) of a superconcept (here, transistor). As with any other KL-ONE Concept, we can ask for the superconcepts or the Roles of pull down transistor.
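The following is a minimal sketch (a hypothetical encoding, not the KL-ONE implementation) of such a terminological component, in which a VRGeneric definition both answers subsumption questions and induces the meaning postulate discussed below:

    kb = {}   # the terminological component: defined Concepts

    def vr_generic(name, superconcept, role, value):
        # theta <- (VRGeneric xi rho psi): value-restrict Role rho of xi.
        kb[name] = (superconcept, role, value)

    def subsumes(general, specific):
        # "transistor" subsumes "pull-down-transistor" by definition chasing.
        while specific in kb:
            specific = kb[specific][0]
            if specific == general:
                return True
        return specific == general

    def holds(concept, individual):
        # The induced meaning postulate:
        #   forall x. theta(x) <=> xi(x) and rho(x) = psi
        if concept in kb:
            sup, role, value = kb[concept]
            return holds(sup, individual) and individual.get(role) == value
        return concept in individual.get("primitives", ())

    vr_generic("pull-down-transistor", "transistor", "source", "GND")
    t = {"primitives": ("transistor",), "source": "GND", "drain": "out"}
    print(subsumes("transistor", "pull-down-transistor"))   # True
    print(holds("pull-down-transistor", t))                 # True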
But in addition, within the assertional component, pull down transistor becomes a unary predicate that can be used to make various assertions. Looking at a design, for example, we might want to say that

    ∃x ∃y (PullDownTransistor(x) ∧ Transistor(y) ∧ Terminal(drain(x), y)),

in other words, that there is a pull down transistor whose drain is a terminal of some (yet to be identified) transistor. The key property of this predicate is that any theory using it would automatically contain an additional postulate pertaining to the meaning of the term:

    ∀x (PullDownTransistor(x) ≡ (Transistor(x) ∧ source(x) = gnd))

which guarantees (by virtue of the rules of logical implication) that every pull down transistor is a transistor and that every pull down transistor is connected to GND. In general, associated with each term-forming operator is an axiom schema stating what the impact of the term in the assertional component should be. For a definition of a Concept θ of the form

    θ ← (VRGeneric ξ ρ ψ),

we would have a meaning postulate of the form

    ∀x (θ(x) ≡ (ξ(x) ∧ ρ(x) = ψ))

which is instantiated for different values of ξ, ρ, and ψ (as above for pull down transistor).³

Another Concept-forming operator under investigation is NRGeneric, which allows a Concept to be defined by number restricting a Role of an existing Generic Concept. The terminological component also includes an operator PrimGeneric for forming primitive terms, especially useful for representing "natural kinds" [2]. For instance, we could define transistor by

    transistor ← (PrimGeneric (NRGeneric device connection 3 3)).

That is, a transistor is, among other things, a device with at least three and at most three connections.⁴

The keystone of our approach is the specification of all sentences that are logically implied by the introduction of both Concepts and Roles in the terminological component. A similar treatment can even be applied to "default" operators. The major difference is that the meaning postulate must refer to the current state of knowledge. For example, instead of having "all pullups are depletion mode", the postulate might imply that "all pullups are depletion mode except those known to be enhancement mode."⁵

³ This is a bit of a simplification, since the form of the meaning postulate actually depends on the form of ρ and ψ. For example, Roles correspond to either binary predicates or unary functions depending on their cardinality.
⁴ PrimGeneric is one of the term-forming operators whose meaning postulate is not a quantified biconditional.
⁵ See [3] for a formal account of sentences such as these.

§5 A Note on Competence

Our feeling is that a knowledge base is not simply a passive repository of assertions about a domain; rather, it is best viewed as actively providing a certain service to the knowledge-based system in which it is embedded. The knowledge base can answer questions for the system based on its theory of the application domain. Moreover, it is the responsibility of the knowledge base to revise that theory as knowledge is accumulated.

The competence of a knowledge base can therefore be defined functionally in terms of two operations: TELL takes a knowledge base and an assertion and produces a new knowledge base; ASK takes a knowledge base and a question and produces a suitable answer. In fact, the service provided by a knowledge base is precisely its behavior under these two operations. Following the discipline of abstract data types, the rest of the system can only tell the knowledge base about its world (or define terms) and ask about that world (or terms). It cannot manipulate the contents of the knowledge base directly and has no information about the storage schemes used.
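As a minimal sketch of this abstract-data-type view (assumed interfaces, not the authors' system; implication is approximated here by simple membership), a knowledge base exposes only TELL and ASK, and signals incompleteness rather than falsity:

    class KB:
        def __init__(self):
            self._told = set()   # hidden: callers never touch this directly

        def tell(self, assertion):
            # TELL: produce an updated knowledge base from an assertion.
            self._told.add(assertion)
            return self

        def ask(self, question):
            # ASK: answer from what the theory implies; "unknown" signals
            # incompleteness rather than falsity.
            return "yes" if question in self._told else "unknown"

    kb = KB().tell("cause(acidosis) != shock")
    print(kb.ask("cause(acidosis) != shock"))   # yes
    print(kb.ask("cause(acidosis) = ketosis"))  # unknown: KB is incomplete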
The actual definition of competence in the assertional component is based on logical implication: what the system "knows" (i.e., the answers it provides to questions) includes what follows from what it has been told. Similarly, competence in the terminological component depends on closure under subsumption (i.e., the system "knows" when one term conceptually contains another). In addition, because TELL can refer to the current state of knowledge (as in the "default" example above), it cannot be defined simply as adding an assertion to the knowledge base; like ASK, it requires deductive capabilities. Moreover, ASK can also refer to the state of knowledge to allow the system to find out where the knowledge base is incomplete (see [3] for more on this). So while implication and subsumption obviously play an important role, the competence of the system is ultimately defined in terms of the more general TELL and ASK operations.

§6 Conclusion

We have presented a view of knowledge representation that distinguishes between terminological and assertional competence, and advocates understanding a terminological component in terms of its impact on an assertional one. We see a number of advantages arising from this segmentation of the knowledge representation task. First of all, it allows the effort to be cleanly separated into two distinct subtasks with a well-defined interface between them. For example, the terminological component can be extended in any number of ways provided the meaning postulates implied by the new constructs are always specified. Similarly, the expressive power of the assertional component can be refined without side-effects to the terminological operators. Also, the choice of data structures and algorithms can be made separately for each component and can be sensitive to the special needs of the component without adversely affecting what is done in the other. For example, a KL-ONE style classifier [4] can be used to implement terminological subsumption while still allowing a more general theorem prover to be used in the assertional component. But most importantly, by dividing the knowledge base into the two components, we can best answer the needs of knowledge-based expert capabilities by preserving the terminological features of object-centered languages without sacrificing the assertional expressive power of the logical ones.

Acknowledgements

Many of these ideas (especially technical refinements to the terminological framework) were developed in concert with Dan Bobrow, Richard Fikes, Austin Henderson, and Mark Stefik. Special thanks also to David Israel for many fruitful discussions about the issues raised in this paper.

References

[1] Brachman, R. J., Bobrow, R. J., Cohen, P. R., Klovstad, J. W., Webber, B. L., and Woods, W. A., Research on Knowledge Representation for Natural Language Understanding, Annual Report, BBN Report No. 4274, Bolt Beranek and Newman Inc., Cambridge, MA, 1979.
[2] Israel, D. J.,
"On Interpreting Network Formalisms." To appear in The International Journal of Computers and Mathematics, Special Issue on Computational Linguistics. Forthcoming.
[3] Levesque, H. J., A Formal Treatment of Incomplete Knowledge Bases. Technical Report No. 3, Fairchild Laboratory for Artificial Intelligence Research, Palo Alto, CA, 1982.
[4] Lipkis, T., "A KL-ONE Classifier." In [5].
[5] Schmolze, J. G., and Brachman, R. J., eds., Proceedings of the 1981 KL-ONE Workshop. Technical Report No. 4, Fairchild Laboratory for Artificial Intelligence Research, Palo Alto, CA, 1982. (Also, BBN Report No. 4842.)
Knowledge Representation Languages and Predicate Calculus: How to Have Your Cake and Eat It Too

Charles Rich
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology

Abstract

This paper attempts to resolve some of the controversy between advocates of predicate calculus and users of other knowledge representation languages by demonstrating that it is possible to have the key features of both in a hybrid system. An example is given of a recently implemented hybrid system in which a specialized planning language co-exists with its translation into predicate calculus. In this system, various kinds of reasoning required for a program understanding task are implemented at either the predicate calculus level or the planning language level, depending on which is more natural.

Introduction

The ideas in this paper arise out of my experience in the Programmer's Apprentice project [5,8] over the past several years developing a language for representing knowledge about programs and programming. Therefore, I will begin by briefly recounting the history of the knowledge representation part of our project. Readers who have worked on other projects to build intelligent knowledge based systems will undoubtedly find many aspects of our story quite familiar; this leads me to believe that the novel step we have recently taken may be widely applicable.

The story begins in 1975 when we first perceived the need for a representation, different from the literal text of a program, which described the logical structure of a program at various levels of abstraction. The application system we were designing (the Programmer's Apprentice) would use this representation in various ways to analyze, synthesize, modify, verify and explain programs. Having no preconceived commitment to a particular formalism, we looked to our own intuitions and informal notations on blackboards and scratch pads for an appropriate framework. What we came up with was a language of boxes and arrows, with boxes inside of boxes and several different kinds of boxes and arrows. The details of this language don't matter here (it is described in detail elsewhere [6,7]) - what is important is that its structure originated in our intuitions about what we were representing and the reasoning we intended to perform.

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505, and in part by National Science Foundation grants MCS-7912179 and MCS-8117633.

The language, which we called plan diagrams, was then implemented in a straightforward way and used for various kinds of reasoning tasks over the next four or five years. For example, Shrobe [9,10] wrote a module which verified the consistency of plan diagrams; Waters [11,12] wrote a module which recognized the occurrence of certain common patterns in plan diagrams. During this period, we took a fairly ad hoc¹ approach to the semantics of our knowledge representation. This is not to say that we didn't know what plan diagrams meant, but just that ultimately the meaning of the representation was implicit in the procedures we were writing to manipulate it (some have called this an "operational" semantics).

In 1979, before embarking on a major extension and re-implementation of parts of the Programmer's Apprentice, I undertook to define a formal semantics for plan diagrams. The approach I chose for this task was to translate plan diagrams into another language which already had a well-defined formal semantics, namely a version of predicate calculus. The major benefit I expected to gain from this effort was to clarify some grey areas in the meaning of plan diagrams. As a corollary, I also expected to use the formal semantics as a kind of specification against which our manipulations of plan diagrams could be validated.² I did not however expect the predicate calculus to show up in any direct way in the system implementation. It turns out that I was wrong! The predicate calculus formalization was directly usable as the basis for a practical implementation in which predicate calculus and plan diagrams co-exist.

Plan Diagrams

In order to appreciate how this state of affairs came about, we now need to describe plan diagrams in more detail. Figure 1 shows the representation in the language of plan diagrams of how to implement a push operation on a stack implemented as an array with an index pointing to the top element.

¹ Please note that I do not intend any negative connotation in the use of the term "ad hoc" here, but just the dictionary definition: "concerned with a particular end or purpose."
² Though this is not the major point of this paper, I claim that this sort of translation into predicate calculus is almost always worth attempting. Moreover, many other people have made this observation [2,13], so it should no longer be controversial.
In 1979, bcforc embarking on a major cxtcnsion and re- imp1cmcntation of parts of the Programmer’s Apprcnticc, 1 undertook to dcfinc a formal semantics for plan diagrams. ‘I’hc approach I chose for this task was to translate plan diagrams into another language which already had a well-dcfincd formal scmnntics. namely a version Of prcdicatc calculus. ‘I’hc major hcncfit I cxpcctcd to gain from this effort was to clarify some grcy arcas in the meaning of plan diagrams. AS a corollary, I also expcctcd to USC the formal semantics as a kind of specification against which our manipulations of plan diagrams could bc validated.’ 1 did not howcvcr cxpcct the prcdicatc caiculu~ to show up in any direct way in the system implcmcntation. It turns out that 1 was wrong! The predicate calculus formalization was direcfly usable as rllc basis for a practical itupleuwnlatiorr in which predicate calculus and plan diagratm co-exist Plan 1Xagrams In order to apprcciatc how this state of affairs came about, WC now need to dcscribc plan diagrams in more detail. Figure 1 shows the rcprcscntation in the language of p.lan diagrams of how to implement a push operation on a stack implcmcntcd as an array with an index pointing to the top clcmcnt. 1. Please note that I do not intend any negative connotation in the use of the term “ad hoc” here, but just the dictionary definition: “conccrncd with a particular end or purpose.” 2. Though this is not the major point of this paper, I claim that this sort of translation into predicate calculus is almost always worth attempting. Morcovcr. many other people have made this observation [2.13]. so it should no longer be controversial. 193 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. This sort of diagram is called an owr/~~. An overlay is made up of two plans (one on the right side and one on the left side) and a set of corrcspondcnccs (hooked lines) between them. Each plan is a hierarchical structure with named, typed parts, called roles. (There are also construitm bctwccn the roles of a plan, but since thcsc are already written in a logical language, their translation to predicate calculus is not part of what is interesting here.) In the overlay of Figure 1, the plan on the right side rcprcsents the operation type Push, whose roles arc its inputs, named Old (of type stack) and Input (any type). and its output, named New (of type stack). The plan on the left side of the oicrlay, called Hump-and-update, dcscribcs the pattern of operations and data structures which implements Push. This plan has four top-level roles: I3ump (operation type Addl); Update (operation type Arraystorc); and Old and New (composite data type Indexed-array). Each of thcsc roles has sub-roles by virtue of being composite types: the sub-roles of Bump and Update arc the input and output roles of the corresponding operation types; the sub-roles of Old and New arc named ISasc (type array) and Index (type integer). Arrows in plan diagrams, such as bctwccn the Output of the Bump of Bump-and-update and the Input of the Update, indicate the flow of data. Finally, the hooked corrcspondcnce lines in an overlay indicate how to map between the implcmcntation description on the left and the more abstract description on the right. For cxamplc. 
the hooked line between the Input of the Update of Bump-and-update and the Input of Push indicates that the element stored in the array in the implementation view corresponds to the object which is pushed onto the stack in the more abstract view.

Generally speaking, translating from plan diagrams to predicate calculus amounts to translating from a richly structured language, i.e. one with many different primitive constructs and methods of combining them, to a much simpler language: the version of predicate calculus I use provides functions and relations as primitive notions, with boolean connectives, quantification and equality as combining methods. Thus one effect of the translation process is to make explicit certain semantic commonalities between different plan diagram constructs. For example, both overlay names and role names become function symbols in the predicate calculus; both data flow arrows and overlay correspondences become logical equalities. The surprising result (though in retrospect it seems obvious) is that these semantic commonalities can also be exploited for an elegant implementation.

FIGURE 1. [The Bump-and-update-as-push overlay: the Bump-and-update plan (roles old: indexed-array, bump: add1, input: any, update: arraystore, new: indexed-array) on the left, hooked to the Push plan on the right.]

A Hybrid Implementation

The implementation that ensued is a hybrid system in which the predicate calculus translation of a plan diagram co-exists with an explicit representation of its box and arrow structure. Furthermore, both levels of language are used for the various kinds of reasoning that need to take place for the application domain. For example, the following is part of the predicate calculus translation of the left side of Figure 1 (plans become predicates on a domain of n-tuples).3

  bump-and-update(α) ⟹
    [ output(bump(α)) = index(update(α))
    ∧ index(old(α)) = input(bump(α))
    ∧ base(old(α)) = old(update(α))
    ∧ index(new(α)) = output(bump(α))
    ∧ base(new(α)) = output(update(α)) ]

This formula is stored and manipulated in a predicate calculus utility package [4] which provides certain simple forms of reasoning. The basic idea of the hybrid implementation is to use the facilities of the predicate calculus package for those types of reasoning which, after the translation, are easily expressible in the language of predicate calculus. For example, one kind of reasoning that the Programmer's Apprentice needs to do is to propagate preconditions and postconditions (part of the constraints of operation types) along data flow arrows to see if they are consistent. Given that the inputs and outputs of operations become function terms like output(bump(α)) above, and given that data flow arrows become equalities between such terms, this kind of propagation is implemented by substitution of equals in the predicate calculus. For example, if the postcondition of Add1 is as shown below,

  add1(α) ⟹ [ output(α) = input(α) + 1 ]

then one can conclude by this reasoning that the Index of the New indexed-array is one more than the Index of the Old indexed-array. Furthermore, the predicate calculus reasoning utility package which we use performs deductions based on substitution of equals very efficiently. (It also performs these deductions incrementally with retraction, which allows us to easily explore the effects of adding and removing data flow arrows.)

3. This presentation suppresses details of the formalization having to do with mutable data structures. See [6,7] for a more complete treatment.
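To give the flavor of this concretely, the following is a minimal Python sketch of substitution-of-equals propagation over ground terms, using a union-find structure. It is purely illustrative: it is not McAllester's Reasoning Utility Package, and it omits the incremental retraction described above.

  class EqualityStore:
      """Toy substitution-of-equals reasoner: terms asserted equal
      end up in the same union-find equivalence class."""
      def __init__(self):
          self.parent = {}

      def find(self, term):
          self.parent.setdefault(term, term)
          while self.parent[term] != term:
              # Path halving keeps the trees shallow.
              self.parent[term] = self.parent[self.parent[term]]
              term = self.parent[term]
          return term

      def assert_equal(self, a, b):
          ra, rb = self.find(a), self.find(b)
          if ra != rb:
              self.parent[ra] = rb

      def equal(self, a, b):
          return self.find(a) == self.find(b)

  eq = EqualityStore()
  # Two of the equalities from the bump-and-update translation above:
  eq.assert_equal("output(bump(a))", "index(update(a))")
  eq.assert_equal("index(new(a))", "output(bump(a))")
  # Propagation: the new array's index is whatever Update stores into.
  print(eq.equal("index(new(a))", "index(update(a))"))   # -> True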
Another attractive feature of performing some deductions in the predicate calculus translation is that different plan diagram constructs which are translated into similar predicate calculus forms can share the implementation of some of the reasoning that is required with them. For example, as mentioned earlier, both data flow arrows and overlay correspondences become equalities. The following is part of the predicate calculus translation of the overlay in Figure 1 (overlays become functions from n-tuples to n-tuples).

  β = bump-and-update-as-push(α) ⟹
    [ old(β) = indexed-array-as-stack(old(α))
    ∧ input(β) = input(update(α))
    ∧ new(β) = indexed-array-as-stack(new(α)) ]

Given this formalization, substitution of equals also serves to map information between corresponding plans in an overlay. For example, anything that is asserted to be true of the Input of the Update of a Bump-and-update plan automatically becomes true of the Input of the corresponding Push, and vice versa.

We have also implemented a general mechanism in the predicate calculus utility package which makes deductions based on the domain and range of functions. Given that both roles and overlays become functions, this single mechanism takes care of both enforcing type restrictions on the roles of a plan and asserting the appropriate abstract plan type corresponding to an implementation plan in an overlay. For example, asserting that the bump function has domain bump-and-update and range add1 causes the following deduction to occur.

  bump-and-update(α) ⟹ add1(bump(α))

Similarly, asserting that the bump-and-update-as-push function has domain bump-and-update and range push causes the following deduction to occur by the same mechanism.

  bump-and-update(α) ⟹ push(bump-and-update-as-push(α))

Other kinds of reasoning used in the Programmer's Apprentice are not easy to express as direct manipulations on predicate calculus formulae. Therefore we explicitly store the original plan diagram as extra-logical annotation of the predicate calculus translation. For example, as far as the predicate calculus reasoning mechanisms are concerned, both bump and bump-and-update-as-push are just function symbols. However some reasoning procedures need to treat these two symbols very differently because one names a role and the other an overlay. The best way to think of these procedures is as operating on plan diagrams. For example, a powerful method of program analysis used by the Programmer's Apprentice is analysis by inspection. The key step in analysis by inspection is to recognize familiar patterns in a program. This recognition is achieved in the Programmer's Apprentice by translating a program into a plan diagram and then trying to match it against a library of standard plan diagrams. This matching algorithm [1] is most naturally written in the language of plan diagrams, not in terms of the predicate calculus formulae directly.4

4. Note however, that once a match has been found, it is recorded in the predicate calculus utility package as an assertion of the plan type predicate. Thus from the point of view of the predicate calculus utility package, the matching algorithm is a derived implication.

What I mean here by "natural" is: first, I doubt whether one could ever discover the right matching algorithm by thinking in terms of the predicate calculus formulae; and second,
although it is theoretically possible to implement a plan diagram matching algorithm operating directly on the predicate calculus formulae, this approach does not lead to a convenient or an efficient implementation. Similarly, an important part of program synthesis reasoning in the Programmer's Apprentice requires breadth-first traversal of a program's plan diagram (e.g. the tree shown in Figure 2), looking for the next implementation decision to make. This procedure is also most naturally written in the language of plan diagrams.

Discussion

Before going any further it is crucial to understand that this paper is about pragmatic rather than philosophical issues in knowledge representation. What we have developed via our experience with plan diagrams and predicate calculus is a new methodology for implementing intelligent knowledge based systems. This methodology has significant bearing on how to structure the necessary reasoning processes in such systems;5 it does not however have anything to say about the important questions of the meaning and expressive power of knowledge representations (since we are only talking about knowledge representations which can be translated into predicate calculus).

We can summarize the methodology as follows: Use a hybrid system in which an ad hoc knowledge representation language co-exists with a predicate calculus translation. At the predicate calculus level, provide as much reasoning power as can be naturally expressed and efficiently implemented in the language of predicate calculus. At the higher level, implement those reasoning procedures which naturally exploit the structure of the ad hoc knowledge representation language. Finally, provide explicit connections between the two levels so that changes at one level can be incrementally translated to the other.

5. It would be nice if we had precise ways of talking about the structure of reasoning processes (and hopefully we eventually will). For the meantime, however, the reader will have to be satisfied by examples and weak arguments about "naturalness."

This hybrid approach resolves some of the controversy [3] between advocates of predicate calculus and the users of other knowledge representation languages by demonstrating that you can have the key features of both. A typical argument made by advocates of predicate calculus is that a given knowledge representation language is not interesting because it can be translated into ("is a notational variant of") predicate calculus. This argument misses the important practical issue, namely what happens if you actually try to use the predicate calculus translation to implement the task for which the original knowledge representation language was designed. What happens is that, in order to write effective reasoning procedures, you end up reinventing essentially the same knowledge representation language as an ad hoc set of conventions and annotations on top of the predicate calculus. On the other hand, ad hoc knowledge representation languages by themselves typically are not designed to facilitate the small, simple deductions, such as implications and substitutions, which are often needed to mediate between their associated special purpose reasoning procedures. These small, simple deductions are just the kind of deductions for which existing predicate calculus machinery is very effective.

Finally, it is interesting to note that one can arrive at this methodology from either of two directions.
In our experience, we started with an ad hoc language and reasoning procedures and then added the predicate calculus level for semantic clarity. I could also imagine starting with predicate calculus as the language of expression and then developing a higher level language as the set of conventions and annotations required to write effective reasoning procedures. In either case, the important conclusion is that both levels of language are useful for building practical systems.

Acknowledgements

Many of the ideas in this paper were developed in collaboration with Dan Brotsky. Also, we would probably never have tried to do things this way if not for David McAllester's Reasoning Utility Package.

References

[1] D. Brotsky, "Program Understanding Through Cliche Recognition", (M.S. Proposal), MIT/AI/WP-224, December, 1981.

[2] R. Fikes and G. Hendrix, "A Network-Based Knowledge Representation and its Natural Deduction System", Proc. of 5th Int. Joint Conf. on Artificial Intelligence, Cambridge, Massachusetts, August 1977, pp. 235-246.

[3] P. Hayes, "In Defence of Logic", Proc. of 5th Int. Joint Conf. on Artificial Intelligence, Cambridge, Massachusetts, August 1977, pp. 559-565.

[4] D.A. McAllester, "Reasoning Utility Package User's Manual", MIT/AIM-667, April, 1982.

[5] C. Rich, H.E. Shrobe, and R.C. Waters, "An Overview of the Programmer's Apprentice", Proc. of 6th Int. Joint Conf. on Artificial Intelligence, Tokyo, Japan, August, 1979.

[6] C. Rich, "Inspection Methods in Programming", (Ph.D. thesis), MIT/AI/TR-604, December, 1980.

[7] C. Rich, "A Formal Representation for Plans in the Programmer's Apprentice", Proc. of 7th Int. Joint Conf. on Artificial Intelligence, Vancouver, Canada, August, 1981.

[8] C. Rich and R.C. Waters, "Abstraction, Inspection and Debugging in Programming", MIT/AIM-634, June, 1981.

[9] H.E. Shrobe, "Explicit Control of Reasoning in the Programmer's Apprentice", Proc. of 4th Int. Conf. on Automated Deduction, February, 1979.

[10] H.E. Shrobe, "Dependency Directed Reasoning for Complex Program Understanding", (Ph.D. Thesis), MIT/AI/TR-503, April, 1979.

[11] R.C. Waters, "Automatic Analysis of the Logical Structure of Programs", MIT/AI/TR-492, (Ph.D. Thesis), December, 1978.

[12] R.C. Waters, "A Method for Analyzing Loop Programs", IEEE Trans. on Software Eng., Vol. SE-5, No. 3, May 1979, pp. 237-247.

[13] W. Woods, "What's in a Link", Representation and Understanding, Academic Press, 1975.
A SYSTEM FOR REASONING ABOUT TIME*

Marc B. Vilain
Bolt Beranek and Newman
Cambridge MA 02238

In this paper we describe the salient features of a new system for reasoning about time. The system represents time primarily -- though not exclusively -- in terms of intervals, and performs deductions on this representation. It has a mechanism for maintaining consistency in the representation and discovering the origin of inconsistencies. Further, its deduction mechanisms for intervals are easily extended to deal with time points. Finally, it embodies a simple and elegant scheme for reasoning about absolute dates.

BY WAY OF INTRODUCTION

Imagine a world in which I were not a computer scientist, but an inveterate explorer, a world traveller. In this world I would have been many places, spending at one time several years in Africa before going on to explore the Peruvian Andes. Imagine also that for some time during my African stay, I contracted a case of beri-beri. Human beings will naturally deduce that my being ill with beri-beri came before my being in Peru. This deduction is typical of the kind of reasoning about time that we have tried to capture in a computer system in current development at BBN. The user of our system makes assertions about the interrelations of events in time. The system in turn deduces new information about the events' interrelations, and makes this information available to the user's queries.

In this paper we will describe the salient features of our system. In particular, we will look at the main representation scheme we have chosen for time (we view time primarily in terms of intervals), and will show how deductions about time can be automated with this representation. We will briefly discuss how our system maintains its representation internally consistent. Finally, we will describe how our deduction mechanisms can be gracefully extended to deal with time points and absolute dates.

* This research was supported by the Advanced Research Projects Agency of the Department of Defense under Contract No. N00014-77-C-0378

A LOGIC OF TIME

There are several ways in which human beings understand time (for example as points, intervals, or with respect to calendar dates). In our system, we have chosen to represent time primarily -- though not exclusively -- in terms of intervals. In so doing we have followed the suggestions of James Allen [2] that intervals are the most computationally natural way of representing time.

Relations between time intervals are described in our system by "operators" in a logic [2,3]. This logic is an extension of that given in [2]; at its core it is composed of 13 relational primitives and a large body of inference rules. The primitives describe unambiguously each of the possible ways that two intervals can be related (they can be equal, overlap, one can precede the other, and so forth). The precise meaning of these primitives is most intuitively communicated by a drawing, and we will hence only give their definitions here in a graphic form (see Figure 1).

[Figure 1: Primitive relations between intervals -- paired diagrams of A before B / B after A, A during B / B contains A, A begins B / B begun-by A, A ends B / B ended-by A, A overlaps B / B overlapped-by A, and A equals B / B equals A]

The relational primitives can be joined into relational vectors; a relational vector describes a composite relation between two time intervals.
For example

  A (DURING BEGINS OVERLAPS) B

asserts that interval A is either strictly contained in B (DURING), is contained in B but co-starting with it (BEGINS), or overlaps the "left edge" of B (OVERLAPS). See Figure 2. The semantics of relational vectors is one of exclusive disjunction. That is, exactly one and only one of the primitive components of the vector precisely describes the relation of the intervals linked by the vector. Hence, a vector consisting of only one primitive exactly describes the relation between two intervals, whereas the vector composed of all 13 primitives we interpret as the zero-vector. Asserting that two intervals are related by the zero-vector means that one in fact knows nothing about how they actually relate.

[Figure 2: The relation A (DURING BEGINS OVERLAPS) B]

We mentioned above that our logic has as part of its core a body of inference rules. These rules are used to combine known assertions and deduce new information. They have the following form.

  "If interval A is related to interval B by R1          (1)
   and B is related to interval C by R2
   then A is related to C by R3"

R1 and R2 are relational primitives and R3 is a vector. The following three rules (illustrated by Figure 3) are typical examples.

  A CONTAINS B and B CONTAINS C => A (CONTAINS) C        (2)
  A CONTAINS B and B BEGUN-BY C => A (CONTAINS) C        (3)
  A CONTAINS B and B OVERLAPPED-BY C                     (4)
    => A (CONTAINS BEGUN-BY OVERLAPPED-BY) C

In our system the rules are used to define the composition properties of the primitive relations of the logic; there is thus one composition rule for each pair of primitive relations (169 rules in total). The rules can be extended in a straightforward way to deal with cases where intervals are related by vectors constructed of more than one primitive relation. Consider formula 1 above. Say R1 is actually a vector V = (v1 ... vm) and R2 is the vector U = (u1 ... un). Then R3 is computed by combining (disjunctively) the vectors deduced from the composition rules for the pairs of primitives vi and uj (for each component vi of V and each component uj of U). This process preserves the disjunctive semantics of vectors.

[Figure 3: Illustrations of rules 2-4]

For example, say A is related to B, and B is related to C as in these two assertions:

  A (CONTAINS) B
  B (CONTAINS BEGUN-BY OVERLAPPED-BY) C.

To compute A's relation to C, we combine the deductions made by the three rules above, and obtain the following result.

  A (CONTAINS BEGUN-BY OVERLAPPED-BY) C

USING THE LOGIC

Our system endeavors to maintain a "complete picture" of all the interrelations of all the time intervals the user has declared to exist. That is, for each pair of intervals declared by the user, the system will keep track of the vector that most accurately describes their interrelation. Some of these relation vectors will have been asserted by the user; others must be deduced from the user's original assertions. These deductions are performed in a process of constraint propagation which is guided by the basic composition rules of the time logic. As we saw above, if we know that A relates to B by R1, and B to C by R2, then we can constrain A's relation to C by the composition rule for R1 and R2. If C is also known to relate to D by R3, then we can constrain A's relation to D by composing R3 with the composition of R1 and R2, and so forth.
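As a concrete illustration, here is a small Python sketch (illustrative only, not the BBN implementation) of vector composition and of a single propagation step. Only the three rules from formulas 2-4 are tabulated; the real system has one entry for each of the 169 pairs of primitives. An empty result vector signals a contradiction.

  from itertools import product

  COMPOSE = {
      ("contains", "contains"): {"contains"},                  # rule 2
      ("contains", "begun-by"): {"contains"},                  # rule 3
      ("contains", "overlapped-by"):                           # rule 4
          {"contains", "begun-by", "overlapped-by"},
  }

  def compose(V, U):
      """Disjunctively combine the compositions of all primitive
      pairs, preserving the exclusive-disjunction semantics."""
      result = set()
      for v, u in product(V, U):
          result |= COMPOSE[(v, u)]
      return result

  def constrain(network, a, b, c):
      """One propagation step: what is known about (a,c) must agree
      with the composition of (a,b) and (b,c)."""
      derived = compose(network[(a, b)], network[(b, c)])
      combined = network.get((a, c), derived) & derived
      if not combined:
          raise ValueError("mutually inconsistent assertions")
      network[(a, c)] = combined
      return combined

  network = {("A", "B"): {"contains"},
             ("B", "C"): {"contains", "begun-by", "overlapped-by"}}
  print(constrain(network, "A", "B", "C"))
  # -> {'contains', 'begun-by', 'overlapped-by'}, as computed above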
To maintain the aforementioned complete picture, the system applies the transitive closure of the composition rules to all the relations between all the declared time intervals. Computing the transitive closure of an operation is a well understood process, but generally requires a clear mathematical formalization of the operation. We have been able to formulate the composition rules of our logic in this way. Indeed, it is possible to state the composition rules in terms of a multiplication and an addition over relation vectors. These operations, along with the appropriate identity elements, define an algebraic structure over relation vectors that is very close to being a semiring. The resemblance to a semiring is sufficiently good that we can compute the transitive closure operation using a modification of a polynomial time algorithm initially designed for closed semirings. The original algorithm, attributed among others to Kleene, is given in [1] and operates in n^3 time and n^2 space, where n is the number of intervals about which assertions have been made.

A GLIMPSE OF CONSISTENCY MAINTENANCE

To deduce the interrelation of two intervals, our system combines information derived from all the assertions that the user has ever made. In so doing, it may discover that some of the user's assertions are in fact mutually contradictory. Contradictions can arise in any number of ways; typical examples include impossible BEFORE/AFTER chains, such as A (BEFORE) B, B (BEFORE) C, and C (BEFORE) A. An important generalization about contradictions in this domain is that they are always due to a set of assertions as a whole. One cannot meaningfully single out one assertion from a contradictory set as being the principal cause of the inconsistency.

Our system handles contradictions by a technique inspired by the truth maintenance system of Jon Doyle [5] and the time specialist of Kahn and Gorry [6]. Whenever a new set of assertions is added by the user, the transitive closure operation is recomputed. During this operation, the system monitors partial computations to discover contradictions. If one is found, the system interrupts the transitive closure process and backtraces through its computations. As it backtraces, the system makes use of information it recorded during the deduction process, and isolates the exact set of mutually inconsistent assertions that led to the contradiction. This set is returned to the user. In part to acknowledge our sources of inspiration, we have named the process by which our system handles contradictions consistency maintenance.

TIME POINTS

At the onset of this paper we noted that intervals are not the only mechanism by which human beings understand time; another common construct is that of time points. Time points are naturally defined by the boundaries of intervals and by certain dating schemes (which we describe below). In fact, much of the earlier literature on reasoning about time describes computer systems whose primary representation of time was in terms of points, not intervals. This is the case with the CHRONOS system of Bruce [4] and the time specialist of Kahn and Gorry.

Our system handles time points in much the same way that it handles intervals: points are objects whose interrelations can be described by primitives in a logic. The logic of points is arrived at by expanding the earlier logic of intervals.
To the older logic we add new primitive relations (which like the old ones can be built into vectors), and new composition rules over these primitives (which can be "conjoined" to deal with vectors). The new primitives can be broken into three groups: (1) those which relate points to other points, (2) those which relate intervals to points, and (3) those which relate points to intervals. As before, we prefer to define these new relations graphically (see Figure 4).

[Figure 4: New primitive relations (involving points) -- point-to-point versions of before, after, and equals, and point-to-interval and interval-to-point versions of before, after, begins, begun-by, during, contains, ends, and ended-by; a dot marks a point argument, as in P1 •before• P2 or A before• P]

The composition rules that we add to the logic not only define the composition of the new primitive relations with themselves, but also with the original relations that applied to intervals only. Again, we present some typical examples of these rules (illustrated by Figure 5).

  A BEFORE• P1 and P1 •BEFORE• P2 => A (BEFORE•) P2       (5)
  A BEFORE• P and P •BEFORE B => A (BEFORE) B             (6)
  A BEGINS B and B CONTAINS• P                            (7)
    => A (CONTAINS• ENDED-BY• BEFORE•) P

[Figure 5: Illustrations of rules 5-7]

The mechanism by which our system makes deductions about points is just an extension of that which it uses to make deductions about intervals. As with intervals, the user can declare the existence of certain time points and assert their interrelations to other points or to intervals. Just as before, the system maintains a "complete picture" of all these objects' interrelations by means of a transitive closure operation. The operation is simply performed over the expanded set of composition rules in the newer logic.

As a final note about points, we should state that including them along with intervals in the domain of our system only minimally complicates the deduction algorithm. The polynomial complexity results and the consistency maintenance remain unaffected.

ABSOLUTE DATING

There are two dating mechanisms that are commonly used by people. The first dates entire intervals, and its best example is the standard calendar (which gives a unique name to intervals of an entire day). The second assigns "time stamps" of sorts to particular moments or points in time. This kind of dating is exemplified by the reference to "9:00 o'clock" in the sentence "Bill will arrive by 9:00 o'clock". The time stamps assigned by this method of dating are what we call absolute dates.

We have incorporated a method for reasoning about absolute dates into our system. Our system handles statements about absolute dates by mapping them into the logic of intervals and points. Once this mapping is completed, the original statements involving absolute dates need in fact never be consulted again. More specifically, whenever the user makes an assertion relating an interval (or point) to a date, the system automatically generates a time point to correspond to the date. This generated time point (which we call a date point) is then appropriately related to the interval (or point) in the user's assertion. The new date point must also be related to all other known date points; the system performs this automatically by simply adding a few new statements to its store of assertions. This process is performed under the guidance of a simple calendar function.
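One way the date-point mechanism might look in code is sketched below. The function names and relation spellings are inventions for illustration, not the system's actual interface, and Python's datetime stands in for the calendar function.

  from datetime import datetime

  date_points = {}    # date point name -> calendar time stamp

  def add_date_point(name, stamp, assertions):
      """Generate a date point for an absolute date and relate it,
      via the calendar function, to every known date point."""
      for other, other_stamp in date_points.items():
          if stamp < other_stamp:
              assertions.append((name, "before", other))
          elif stamp > other_stamp:
              assertions.append((name, "after", other))
          else:
              assertions.append((name, "equals", other))
      date_points[name] = stamp

  assertions = []
  add_date_point("9:00", datetime(1982, 8, 18, 9, 0), assertions)
  add_date_point("17:00", datetime(1982, 8, 18, 17, 0), assertions)
  # "Bill will arrive by 9:00" now just relates an ordinary point to
  # the generated date point; constraint propagation does the rest.
  assertions.append(("arrival(Bill)", "before-or-equals", "9:00"))
  print(assertions)
  # [('17:00', 'after', '9:00'),
  #  ('arrival(Bill)', 'before-or-equals', '9:00')]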
Once the system has generated these (internal) assertions, it can use them to deduce new information by the very same constraint propagation process that operates over intervals and points. It never again need consult the user's original statements relating dates to intervals or points. This is an appealing result since it obviates the need to maintain separate mechanisms for dealing with dated and undated information. This dual reasoning was typically present in earlier time systems, such as that of Kahn and Gorry.

Finally, we should note that the assertions our system generates when creating a date point have the same computational status as the user's undated assertions. This insures that dated assertions will fall under the scrutiny of consistency maintenance just as undated ones do. The system will discover that dated assertions are inconsistent with each other (or with undated information) in exactly the same way that it discovers inconsistencies between undated assertions.

OTHER DIRECTIONS

What we have described in the preceding pages are firmly established features of our system. At the time of writing, most of these features have been implemented. Before closing this paper though, we would like to mention very briefly some of the new directions in which we are extending our work. We are concentrating our efforts primarily in two areas. The first of these is reasoning about absolute duration. We would like to give our system the ability to make deductions based on information about the length of intervals. The second is a search for methods for limiting computation. The polynomial complexity of our deduction algorithms is good, but not ideal; we would like to have an elegant mechanism to limit the amount of computation involved in the deduction process. We have tentative solutions in both of these areas, and will report upon them more fully in a forthcoming document [8].

In parting, we would like to place our work in a broader perspective. Recently, several writers have described general models of time and action (specifically James Allen [3] and Drew McDermott [7]). Our efforts are nowhere nearly as ambitious as theirs. Instead we have sought to construct a basic computational tool that could be used by larger programs. (In our own research, for instance, we intend to use our time system as part of a plan recognizer for natural language understanding.) Our approach is actually consistent with that of McDermott and that of Allen. In fact, both of these authors have assumed in their models the existence of underlying time maintenance modules similar to the one described here. Our goals in this research have all along been to provide a simple but complete inference mechanism over the time domain, one that we hoped would free researchers in AI from having to tackle the low-level details of reasoning about time. We are hoping that our system will permit them to turn their attention to more rewarding investigations in problem solving, language understanding, and other intelligent behavior.

This work would not have been possible without the intellectual stimulation and support of many people. Among them are James Allen, Jim Schmolze, Candy Sidner, and Bill Woods. To them and others, all my thanks.

REFERENCES

[1] Aho, Alfred V., John E. Hopcroft, and Jeffrey D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1974.

[2] Allen, James F. Maintaining Knowledge about Temporal Intervals.
Technical Report 86, University of Rochester, Dept. of Computer Science, January, 1981.

[3] Allen, James F. A General Model of Action and Time. Technical Report 97, University of Rochester, Dept. of Computer Science, September, 1981.

[4] Bruce, Bertram C. A Model for Temporal References and Its Application in a Question Answering Program. Artificial Intelligence 3(1):1-25, 1972.

[5] Doyle, Jon. Truth Maintenance Systems for Problem Solving. Technical Report 419, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, January, 1978.

[6] Kahn, Kenneth, and G. Anthony Gorry. Mechanizing Temporal Knowledge. Artificial Intelligence 9(1):87-108, 1977.

[7] McDermott, Drew. A Temporal Logic for Reasoning about Processes and Plans. Research Report 196, Yale University, Dept. of Computer Science, March, 1981.

[8] Vilain, Marc B. An Inference Mechanism for Reasoning about Time. Forthcoming research report, Bolt, Beranek, and Newman Inc.
CIRCUMSCRIPTIVE IGNORANCE

Kurt Konolige
Artificial Intelligence Center
SRI International

ABSTRACT

In formal systems that reason about knowledge, inferring that an agent actually does not know a particular fact can be problematic. Collins [1] has shown that there are many different modes of reasoning that a subject can use to show that he is ignorant of something; some of these, for example, involve the subject reasoning about the limitations of his own information-gathering and memory abilities. This paper will consider a single type of inference about ignorance, which we call circumscriptive ignorance. We present a partial formalization of circumscriptive ignorance and apply it to the Wise Man Puzzle.1

1 This short note describes current research collected in [4]. The work presented here was supported by grant N00014-80-C-0296 from the Office of Naval Research.

1. Circumscriptive Ignorance

The premise that there is a limited amount of information, resources, or strategies available for the solution of a problem is often an unstated but essential part of the problem abstraction. For example, in the Missionaries and Cannibals Puzzle, it is important that only a single boat is available to ferry people across the river; one cannot invoke a helicopter brigade from the Sudan to solve the puzzle. McCarthy [5] has investigated the first-order formalization of problem statements such as this, using a circumscription schema to capture unstated limitations on resources.

In puzzles that involve reasoning about the knowledge agents possess, there are often unstated conditions on the initial information given an agent, as well as on the information he can acquire. In the Wise Man Puzzle (see Section 3 below for a full statement of this puzzle), it is common knowledge that each man can see his neighbors' spots and knows from the king that there is at least one white spot. It is an unstated condition of the puzzle that this is the only knowledge that wise men have about the initial situation; in an adequate formalization of the puzzle it should be possible to prove that each wise man is ignorant of the color of his own spot in the initial situation.

In effect, the knowledge that is available to the agents in the puzzle is being circumscribed; informally one would say "The only facts that agent S knows about proposition p are F." If from the facts F it is not possible for S to infer p (or ¬p), then S does not know whether p is true. Proving ignorance based on a limitation on the knowledge available to an agent will be called reasoning by circumscriptive ignorance.

Circumscriptive ignorance, especially for the Wise Man Puzzle, has been formalized in first-order axiomatizations of a possible-world semantics for knowledge (Goad [2], McCarthy [6], and Sato [7]). However, there has been no formalization in a modal logic of knowledge. The advantages of using a modal formalization are clarity of expression and the ability to use inference procedures, including decision procedures, that have been developed for modal logic. In the next few sections we outline a modal formalization of circumscriptive ignorance.

2. The Modal Logic KI4

The modal logic we shall use is a propositional modal logic based on Sato's K4 [7], which includes an axiomatization of common knowledge.2 K4 is a family of languages parameterized by the choice of propositional letters Pr and agents Sp. 0 ∈ Sp is a reserved name for Fool, a fictitious agent whose knowledge is common knowledge.

2 For simplicity we use K4 rather than Sato's more complicated KT4, which deals explicitly with time.
For a particular choice of Pr and Sp, the language K4 is the propositional calculus over Pr, together with a set of indexed unary modal operators [S], S ∈ Sp. The intended meaning of [S]α is that agent S knows α. The axiom schemata for K4 are

  (A1) All propositional tautologies
  (A2) [S](α ⊃ β) ∧ [S]α ⊃ [S]β
  (A3) [S]α ⊃ α                                           (1)
  (A4) [S]α ⊃ [S][S]α
  (A5) [0]α ⊃ [0][S]α

where α and β denote arbitrary sentences, and S denotes an arbitrary agent. Axioms A1-A4 give the system S4 for each modality [S], while A5 is the common knowledge axiom: what any fool knows, any fool knows everyone knows. The two rules of inference are modus ponens and necessitation (from α, infer [S]α).

In K4 and other modal logics of knowledge, an agent's knowledge is described as a theory, that is, as a set of formulas that contain the axioms and are closed under the rules of inference. To see this, note that all instances of [S]α for which α is an axiom of K4 are provable, and that modus ponens is implemented by A2. We shall use the term agent's theory to mean the set of formulas α for which [S]α can be proven in K4.

There is a difference between the axiomatization of an agent's theory and K4 itself. K4 is a Hilbert system in which no proper axioms are allowed; an agent's theory allows proper axioms (i.e., we can assert formulas of the form [S]β).3 We define the α-theory of K4, for a fixed sentence α, as the set of formulas β for which α ⊃ β is a theorem of K4. We write α ⊢ β if β is in the α-theory of K4, i.e., α ⊃ β is a theorem of K4. K4 by itself is not sufficient to represent circumscriptive ignorance, since there is no way of limiting the proper axioms that could conceivably be used to derive knowledge in an agent's theory. If we look at a particular theory for an agent, where the only proper axiom is α, it is impossible to derive proofs of certain formulas within that theory; but there is also no way to express this in K4 itself.

3 This difference becomes apparent in the rules of inference: necessitation is not included as a rule of inference in an agent's theory, since it would allow the derivation of [0]p from [S]p.

To express circumscriptive ignorance, K4 is extended by a family of new unary modal operators indexed by sentences of K4. These are called circumscriptive modalities, and are written as [α], where α is a sentence of K4. The extended language is called KI4. In informal terms, [α] is intended to mean the α-theory of K4; that is, [α]β holds just in case β is in the α-theory of K4. Thus the notion of provability is explicitly introduced into KI4.

The axiomatization of [α] is problematic, since it involves formalizing not only which sentences of K4 are provable, but also which sentences are not. However, a sufficient set of axioms for KI4 can be obtained by making use of the intended interpretation of [α] as provability in K4. The axioms of KI4 are simply those of K4, together with the schemata

  (A6) [α]β, where α ⊢ β                                  (2)
  (A7) ¬[α]β, where α ⊬ β

The rules of inference for KI4 are the same as K4.

The axioms (2) only form a recursive set if K4 is decidable; further, it is obvious that if K4 is decidable,
An alternative characterization of A6 that is more in the style of typical modal language axiomatization could be given as follows: (A8) [cr]p, /3 an instance of Al-A5 W) bla w4 bl(P~7D@lPx47) (All) blP~b][S]P . (3) A8 and A9 ensure that all instances of the logical axioms of the a-theory of K4 are present; A10 and All are modus ponens and necessitation, respectively, for K4. However, there is no similar characterization of A7. We will use the schemata A6 and A7 directly in the remainder of this paper. The circumscriptive quality of [LL] comes from the restriction of its meaning to the a-theory of K4. If we take cr to be a knowledge operator, this translates into a circumsciption of an agent’s theory. For example, if a = [Sl% [aI P' k lc s out the agent’s theory for which q is the only proper axiom. More complicated statements are possible; for example, to say that the only knowledge S has about p is that he knows either q1 or 42, assert [a][S]p, with cr = [S]q1v[S]q2. NY14 has several interesting properties related to its circumscriptive nature. For every atom of the form [a]@, either it or its negation is provable in KI4. Every subset of the language K14 whose atoms are all circumscription operators is complete: it has no consistent proper extension. Thus KI4 (with circumscription atoms only) has a single model, given by the theorems of K4. 3. The Wise Man Puzzle The Wise Man Puzzle can be stated as follows [6]: A king wishing to know which of his three wise men is the wisest, paints white dots on each of their foreheads, tells them that at least one spot is white, and asks each to determine the color of his own spot. After a while the wisest announces that his spot is white, reasoning as follows: “Suppose my spot were black. The second wisest of us would then see a black and a white and would reason that if his spot were black, the least wise would see two black spots and would conclude that his spot is white on the basis of the king’s assurance. He would have announced it by now, so my spot must be white.” We simplify this puzzle by having the king ask each wise man in turn what color his spot is, starting with the least wise. 203 To formalize the puzzle, we use the language K14 with Sp = {O,S1,&,S3} and Pr = {pl,p2,p3}. pi is the sentence asserting that Si has a white spot on his forehead. A handy abbreviation is [Sl]pk [S]pv[S]-p, i.e., S knows whether or not p is true. Then the following axioms suffice for the initial conditions of the puzzle (without worrying about time): ciyl) pl Ap2Ap3 (w2) [o](PlvP2vP3) (4 where & = W2AW3A[Sl]pzA[Sl]ps. IV2 says that it’s common knowledge that at least one spot is white; W3 says that everyone knows that each can see the spots of the others. W4 is the circumscription axiom: it says that S1 knows whether pl holds solely on the basis of common knowledge (W2 and W3) and his own observations. In this initial situation, it is possible to prove, through the use of W4, that S1 does not know the color of his own spot. To see if [~][S,l]p~ or its negation is provable in K14, it suffices by A6 and A7 to apply the decision procedure for K4 to the sentence a> [Sll]pl. The decision procedure looks for a K4-model of the negation of this sentence, and indeed finds one. Hence a >[rS1npl is not provable in K4, and -[a][S,jjp, is a theorem of K14 by A6. This in turn implies, by W4, that -[lS1npl, i.e., S1 does not know the color of his own spot. The second situation is similar, except that S2 has heard Sl’s reply to the king that he does not know his spot’s color. 
The second situation is similar, except that S2 has heard S1's reply to the king that he does not know his spot's color. The axioms for this situation are W1-W3, along with

  (W5) [0]¬[S1!]p1                                        (5)
  (W6) [α][S2!]p2 ≡ [S2!]p2

where α = W2 ∧ W3 ∧ W5 ∧ [S2]p1 ∧ [S2]p3. Again, by reasoning in the decidable theory K4, it is possible to show that ¬(α ⊃ [S2!]p2) has a model; thus S2 does not know the color of his spot.

In the final situation, S3 knows that S2 does not know the color of his spot after hearing S1's reply, and so S3 is able to deduce that his own spot is white. The axioms here are W1-W3 and

  (W9) [α][S3!]p3 ≡ [S3!]p3                               (6)

where α = W2 ∧ W3 ∧ W7 ∧ W8 ∧ [S3]p1 ∧ [S3]p2, and W7 and W8 record S1's and S2's announcements of ignorance. It is possible to prove in K4 that α ⊃ [S3!]p3, and so W9 simply asserts that S3 knows the color of his spot.

4. Conclusion

This brief note has introduced the idea of circumscriptive ignorance, showing how it could be formalized in a modal logic of knowledge called KI4. It then becomes possible to infer nonknowledge in the Wise Man Puzzle within this logic. The utility of KI4 is not limited to this puzzle, however; it should be possible within this logic to do general reasoning about the state of knowledge needed by an agent to derive various facts.

An interesting modification of KI4 occurs if the sentences in the circumscriptive operator are taken from KI4 itself, rather than K4. In this way agents could reason about the limitations of the state of knowledge of other agents. However, it is no longer obvious that KI4 would be decidable under this modification; further work needs to be done on this problem.

References

[1] Collins, A., "Fragments of a Theory of Human Plausible Reasoning," in Proceedings of Theoretical Issues in Natural Language Processing, Nash-Webber, B. and Schank, R. (eds.), Cambridge, Massachusetts (June 1975).

[2] Goad, C., "A Formal Representation for Situations Involving Knowledge," unpublished note, Stanford University, Stanford, California (December 1976).

[3] Hughes, G. E. and Cresswell, M. J., An Introduction to Modal Logic, Methuen and Company Ltd., London, 1968.

[4] Konolige, K., "Modal Logics for Belief," unpublished notes (February 1982).

[5] McCarthy, J., "Circumscription - A Form of Non-Monotonic Reasoning," Artificial Intelligence 13 (1980).

[6] McCarthy, J. et al., "On the Model Theory of Knowledge," Memo AIM-312, Stanford University, Stanford (1978).

[7] Sato, M., A Study of Kripke-type Models for Some Modal Logics by Gentzen's Sequential Method, Research Institute for Mathematical Sciences, Kyoto University, Kyoto, Japan, July 1976.
Modeling Motion With Qualitative Process Theory

Kenneth D. Forbus
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
545 Technology Square
Cambridge, Mass. 02139 USA

Abstract

Representing motion is an important part of Naive Physics. Previous qualitative models of motion were centered around the idea of Qualitative States. This paper discusses an alternative representation in terms of Qualitative Process theory. Viewing motion as a process has several advantages, notably the ability to make more detailed inferences about dynamics and the ability to combine process descriptions to model more complex systems. After examining the relationship between Qualitative State and QP theory representations of motion, the utility of the QP representations is illustrated by analyzing an oscillator.

1. Introduction

Representing motion is an important part of Naive Physics [Hayes, 1978]. Previous qualitative models of motion were based on the idea of Qualitative States and qualitative "simulation rules" to represent changes of state ([deKleer, 1975][Forbus, 1981a]). This paper examines an alternate way of representing motion based on Qualitative Process theory ([Forbus, 1981b][Forbus, 1982a]) and compares it with Qualitative State representations. The power of the Qualitative Process theory (QP) descriptions is illustrated by an analysis of a simple oscillator to determine the existence of a limit cycle. The example is drawn from [Forbus, 1982b], which contains more details.

2. Qualitative State Representation

The Qualitative State representation is based on the notion of state in classical mechanics. Certain parts of the classical state are represented abstractly (typically position is represented by a piece of space, and velocity by a symbolic heading) and the type of activity, which classically is implicit in the choice of descriptive equations, is made explicit. Qualitative states are linked by qualitative simulation rules that map a qualitative state into the qualitative states that can occur next. These rules are usually run to closure from some initial state, producing a description of all the possible states called the envisionment. The envisionment can be used to answer simple questions directly, assimilate certain global assumptions about motion, and plan solutions to more complex questions.

While envisioning is useful, it cannot deal with many complicated domains and questions.1 Domains where moving objects continually interact, including mechanisms such as clocks, are hard to model with qualitative states because it is hard to build qualitative simulation rules for the motion of a compound object from simulation rules for the motions of its parts. More importantly, qualitative reasoning is more than just simulation. An example that will be examined below is determining whether or not pumping an oscillator will result in stable behavior. Such questions require richer notions of quantity, process, and time than qualitative state representations provide.

3. Qualitative Process Theory - Basics

Qualitative Process theory (QP) extends the ontology of common sense physical models by adding the notion of a physical process.

1. [Forbus, 1982b] discusses these limitations in more detail.
(31’ theory provides a language for specifying proscsscs and their cffccts in a way that induces a natural qttalitativc rcprc5cntation for quantities and allows both the deduction of what proccsscs occur in a situation and how they might change. Space permits only a brief sketch of the theory; its prcscnt status is dcscribcd in [Forbus, 1982b]. A situation is composed of objects and relationships bctwccn them. ‘f‘hc cent inous ~ilKllllCtCl3 of an ohjcct. Such as tcnlpcr,lturc and prcssurc. arc rcprcscntcd by ~~latltitics. A quantity consibis iIt‘ two IXll‘tS. 3!1 ;11T101!!1, t ;ll!d a dcri\atitc. each of which has parts u and ma~nitudc (dcnotcd A~. A,,,. Ds. and Dm rcspcctivcly). Wlicn UC wish to rCfCi tc; a quantity or sonic part of it at a particular Liiric (citlicr inst‘lilt or interval), hc write: (M Q t! which mcil11b “the \ aluc of Q mc:!surcd at t”. ‘1%~ dcrivativc of a yuLlntity is dctcrmincd by the sum of the i;lllucnccs an it. A principle tcnct of QI’ Lhcory 1s that only proccsscs C:HISC changes, SO only proccxscs impose influcnccs. Processes als!) can induce functional dcpcndcnccs bctwccn quantities, and taQ Q A) means “thcrc exists i!n increasing monotonic function induced by a process such that Q is functicmall~ dcpcndcnt on at least I<“. aQ- signifies the Si\mC. hut \j ith the implicit function being dccrcasing monotonic. In basic Ql’ thct)ry. the value of a quantity is dcfincd in terms of the incqualltIcs ~h:tt hold hctwccn it and its Otlnntltv Smcc - a partially or&l-cd collcct~on of’ numbers and qiiantitics mainly dctcrmincd b\ the \ocilhtllar\ of nroccsscs for the domain. A A process pi’occss is spccififd 1,; five parts: !nclividll;l!?: dcccr-iptions of the entities the process act:; bctwccn. precondition?: statcmcnts that must be true fi)r the process to act, but not deducible solely within QP theory. ottanlit\j condition<: statcmcnts that must be true ii)r the pr~)ccss to act, but arc deducible wiLllin QP theory. Rckttions: the relationships between the individuals which hold when the process is active. InflucnccF: descriptions of what quantities are aftcctL’d by the process acts bctwccn any collection of individuals it mate :hes, whcnc\cr botll the prcconditious and quantity conditions arc true. Yrccondinons art‘ distinct from quantity conditions because some factors arc Cxtcrni!l to t.hc> physics of a domain - a purely physical theory cannot predict whcthcr or not somconc will walk by and turn on a stove, for instance. although it can predict rllat a result of this &on will bc that the water sirting in the kettle on top of it will soon boil. Object descriptions peculiar to a domain, such as viewing a spring as a Hookc’s law dcvicc, are specified in the same way as processes cxccpt Lucre can be no influcnccs. QJ-’ theory can bc viewed as providing a language for rcprcscnting portions of physical thcorics. In this language objects 205 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. and smiple processes arc primitivcc, with ~harcd parilmctcrs and scq ucn!i,ll occiirancc providing the nicxts of c~tmbination. ,Ibstr;:ctton is proI idcd 1~0th !J!( naming cotnbinariont, and by a type hic,~~Ch;i (,~lrhott$t :hrl notatiotxll cttn\~cntiotts fijr the type hicarchy have not been worked ou:). Gcncr,tl l~fis, such 3s ciicrgy conscrv;ttion and Newton’s laws, can bc viewed as constraints on what proccsscs arc possible. l.ct us SW how thcsc ideas cLtn bc used to rcprcsent tnotion. 4. 
A \‘oc;M;try for hlotion h simple vocabular4 fat abstract one dimensional motion will serve as an illustr,ttion.l IQire 1 contains the process spcciticationc for motion ;tnd xcclcration. ‘l’hc motion dcscripGon says thitt motion will occur when a tnov,tblc (:bJcct is free in rhc ditcction of its iclocity, and th:tt velocity is tton-7cro. hilotton I$ a posijivc influcncc on the position quantity of an objxt, in ihat if Ihc velocity ts po5iltvc the position will “incrcasc” and it’ ncSatl\ C. IJ~C poGtion will “dccrc;tW”. Acccictxrion occurs whL‘t1 c1 l~lCJv;liJlC r)bLiccl ilaS a non-%U’o tic1 fWcC in ;I ticc direction, and the inilucncc it provides on velocity is qu;tlit,ttivclq pr0portional to the net fiXC and iIl\TrSCly ptUl~~Jrtloll;ll to Iilc lnass of the object. l-YI-iction occurs when thcrc is sitrfacc cotitxt, and produces a force on the ob~ccl L hich is qu;lhLtti\ cly propcWliotial Lt! lllc nornial forCC and am in ;I dircction opposite th:tt of’ tl~c motion (cncotlcd by I- instead of I+). Wh11c this dcacription is NC\4 lt)niiln, .~ristotcli;tn and ltnpctus Uicorics? cljuid a!so bc dcscribcd - 01’ theory constrains the Tel-m of l~ll>SiCiil thcorics, not their content. l;ig. I. l’row~s Ik<criptions of Rlotion ;rrld Acrchxition MotionjE,dir) individuals:(movable-object B) Preconditions: Free-direction(B. dir) QuantityConditions: (greater-than A,[Vel(B)] u) Influences: (I+ Pas(B) Vel(B)) Acceleration(B,dir) individuals:(movable-object B) Preconditions: free-direction(B.dir) QuantityConditions: (greater-than Am[Fnet(B)] a) t?elations: Let Act be a number (a Q ACC Fnet(B)) (aQ- ACC Mass(B)) (correspondence (Act LeJ) (Fnet(Bl ZZLQ)) Influences: (I+ Vel(B) Act) Moving-Friction(B,S) individuals:(movable-object B) (surface S) Preconditions:Sliding-Contact(B.S) QuantityConditions:Motion(E3,along(S)) Relations: Let fr be a number taQ f r Fnormal (B*S)) Influences: (I- Falong(B~S) fr) 1. More detailed representations xc tht targcr of work in the Mcchani5n-t World, \i hich concerns bitnplc dc\ ices such as clocks. Much work remains. cspccidlly in the gcomctric descriptions rcq ui red. 2. (klcClosky, 1382J argue5 nnilc theories of motion in our culture ccrrcspond to i~nlxxr~ C~KOI ies. not at-isrote!ian theories. Collisions are complicarcd. ‘I’he simplest version just involves a lWCt%l of vclocit>, HS illustrarcd in figure 2. Here direction-towards(C,B,dir) ;IsscI'IS thnt Ihc lit>.icct is 1110viIlg in ditcctil~ll dir tixm C to 6. start, end. during iltld duration dCfillC lhC tc-mporal ,tspccts of an cpisodc in a proccS hictory tb;tt currcsponds to this process occuring. l<vcn our more complicated tnodcls of collisions appear to use such bchitk iot4 descriptions. such as H conipor~tid process consisting of conlxting the surface. compression, cxpansic~tl, ;tnd finally breaking contnct. ‘I‘hc rypc of collision which occurs can bc spccificd by refcring to the theory of nl;\tctLtls of the objects involved. ‘I’hc process vocabulary for motion prcscntcd above is quite nbstctct. ‘I‘hc particul;lr kind of motion - flyins, sltding. rolling, or s14 inging - is not mctitioncd. ‘I’hcW motion5 \+ 3uld bc spccinlizations of the motion process considcrcd ilbO\ C, dctincd by additioIli4 prcconditiotx and rcl,tlions (sliding and rolling rcquirc surface contact ;tnd could invol\c friction. tix InsLtncc). I’hc x~vantagc of hmirlg ~hc abstract description as well ;IS the more dctailcd OIXS is th;tc Lvcak conclusions cat1 be draMin CICII with litrlc tnformation. If WC kick something and it isn't blocked. 
fijr I~SLI~CC, then it will move. Now WC cittl cxitminc L~C relationship bctwccn this rcprcscnt;ttion of niolion and Lhc Qtt.tlitati\ c St;W rcprcscntation. If WC ;ISSIIWC tnoGon and itcccl~t‘ittion ilrc 111~ o~I!’ proccsscs that occur, then the limit itttillqsis fi)r ;t mo\ tttg objccl will only include the possibilirics raiscti by d~tlilt1ltC!3. ‘1‘0 include the pos4blc changes in process c,~uscd by kincmiltics (ix.. hilling sotnc~hing) the rclcvant gconictr~ of the situation must bc rciimcd f’tml the preconditions and 111‘1l>l~d I~LO it Qttmtity S~XC. ‘l’his rcquircs dcscrihing space by il plXc VOC~lb~ilill-1). 1 illlti using Lhc clcmcnts in Lhc place vocabulary as Il?c ClctncnK in IhC posiliim @anliLy SpilCC. ‘1’0 induct an ordering bctH cm Ihc clct~icnLs for tmJLion in LMO and lhrcc dimensions a dirccrion must alS(J bc included in the process description, since partial orders i\rc OIIIY Kcll-dc!incd for one dimension. I‘hc atnbiguit} due to dimcnsiotl;tlity md symbolic hcadin~ can bc encoded by the lack of ordcrine IWWCCI~ the Quantity SpilCC ClCtllCllt~. ‘I‘his also nlcitns the pl.tcc tnust by cncodcd in the nmion process. which in [urn ntc;tns th,it an tnctancc of a motion proccrs in this vl~cLlbttlary will look like a C)ltitlit:tti\‘c State for the WilC collection of l>lilCCS iltld type of tnotion. ‘I‘hC qualitative sitnttlittion rules correspond Lo a compilation of the limit analysis on this ncu motion Vl~Cilbtllary. Frotn this pcrspcctivc WC can xc the rclntivc strengths of the two rCprcsclltilti0tK. l-or c\olving motion dccct-iptions the qualitative stlltc rcpmcntatiotl makes scnsc, sinsc kinctnatic Fig. 2. Collision Specification Collide(B,C.dir) Individuals: (movable object 8) (immobile object C) Precondition: (and contact(k3.C) direction-towards(B,C,dir)) QuantityCondition: Motion(B,dir) Relations: (= (M VeltB) start) (- (M Vel(B) end))) (= (M Vel(B) during) zero) (= duration zero) (T direction-towards(C,B,dir) end) (T contact(B,C) end) : (T <statement> <time>) means ; "<statement> is true during <time>" 1. [Forbct\. 1981;1] clcscrrh~s the princiljlcs tnlolvtd aud defines a plxc \ ocabul;tr!~ for mo:iott thr(lu!Ji spx,: in ;I ~~t~~plc domain. 206 constraints arc essential to motion. Its “compiled” nature makes quslitativc stares in,lppropriate for very simple dcduclions (where nniy part of a qualit:iti\c st;ltc is known) and more complex questions involving dynvmics or compound Sq'SlClll~. ‘I’hc ricxt section illustrates the kind of detailed analysis made possible by the QP description of motion. 5. An Oscillator Consider the block 13 conncctcd to the spring S in figure 3. Suppose that the block is pulled back so that the spring is cxtcndcd. Assume also that the contact bctwccn the block and the floor is frictionlcss. What happens? First, the spring object includes: Relations: Ds(Lrest(s))=O ;i.e., Lrest is constant Let Oispjs)=(- A(L(s)) L,e,t(s)) ta - Fi(S) Disp(s)) (co?respondence (Fi(S) w) (Disp(s) w)) ahcrc F: is the internal force due to the composition of the spring. Since Disp( s) is grcatcr than D. the spring will cxcrt a force. I~ccausc the block is rigidly conncctcd to the spring, the net force on ti?C block will bc ncgativc and since the block is free to mo~c in the direction of the force. an acccllcration will occur. ‘l’hc nccclcration will ill turn c~usc rhc vcloolty Lo mo~c from /CFO. which will in turn CAIFC Ds(Pos(B))=-1. IIy rigid contact, DS(L(S))=-1 ~lnd by tbe a Q relation with displaccmcnt. us ( Fnet (8) ) = 1. 
The processes occurring are Motion(B,-), relaxing(S,-), and acceleration(B,-). The next process limit occurs when L(S) = Lrest(S), ending the relaxing. The correspondence tells us the force on the block becomes zero, so the acceleration will end as well. However, the motion does not. Setting aside the details, the next set of processes are Motion(B,-), compressing(S,-), and acceleration(B,+). The only limit point in the quantity spaces that are changing is the zero velocity point (assuming the spring is unbreakable), so the motion will continue until the velocity is zero. The conclusion that the next set of processes are Motion(B,+), relaxing(S,+), acceleration(B,+) and then Motion(B,+), stretching(S,+), acceleration(B,-) follows in the same way. At the end event of the last set of processes, the orderings on the quantity spaces and the processes evoked are the same as the initial instant. Thus we can conclude that an oscillation is occurring. Note that the processes need not be the same, because the preconditions might have changed. Figure 4 illustrates the process history for the oscillator.

An additional complexity is introduced if we allow for properties of materials, such as the spring being breakable. The relevant effects of material composition can be modelled by introducing elements into the force quantity space for the spring corresponding to the occurrences of processes such as breaking and crushing, in addition to those for stretching and compressing. It appears that an assumption is needed to rule out crushing at t3, but breaking can be ruled out by an energy argument (essentially, energy considerations lead to the conclusion that the position of the block at t5 is no greater than the position at t1, so that if it didn't break then it won't break later). The details can be found in [Forbus, 1982b].

To further analyze this system, we must treat the collection of objects as a system and the processes that occur as a compound process. Representing the combination allows the explicit representation of properties belonging to the collection, such as the energy of the system, and properties defined over a cycle of the combination, such as energy lost and maximum displacement. We can then determine the consequences of perturbing the situation in various ways. In particular, the relations for the compound process include:

(alphaQ+ MaxDisp(Obj) E(System))
(correspondence (MaxDisp(Obj) ZERO) (E(System) ZERO))

This relationship makes it possible to deduce that if friction were introduced (i.e., Ds(E(system)) = -1) the oscillation process will eventually stop, and that if the system is pumped so that its energy increases (i.e., Ds(E(system)) = 1), the materials involved in the oscillator may break in some way.[1]

[Fig. 3. Sliding Block With Friction - figure omitted.]
[Fig. 4. Process history for the oscillator - figure omitted; time runs from T1 through T4.]

Suppose for example the oscillator is subject to friction, but we pump it with some fixed amount of energy per cycle, as would happen in a mechanism such as a clock. Is such a system stable? The only things we will assume about the friction process in the system is that

Relations: (alphaQ+ E(loss) E(System))
  (correspondence (E(loss) ZERO) (E(System) ZERO))
Influences: (I- E(System) E(loss))

where E(loss) is the net energy lost due to friction over a cycle of the oscillator process.

1. The Tacoma Narrows bridge phenomenon, something every engineer should know about.
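The oscillation conclusion above rests on recognizing a repeated qualitative state. A schematic rendering (assuming states are hashable summaries of quantity-space orderings plus active processes, and that each state here has a unique successor):

def find_cycle(initial, successor):
    # Follow successors until a state repeats; the suffix from the first
    # repetition onward is the oscillation.
    seen, history = set(), []
    state = initial
    while state not in seen:
        seen.add(state)
        history.append(state)
        state = successor(state)
    return history[history.index(state):]

# Toy four-phase cycle like the block-and-spring example:
phases = {'M-,relax-': 'M-,compress-', 'M-,compress-': 'M+,relax+',
          'M+,relax+': 'M+,stretch+', 'M+,stretch+': 'M-,relax-'}
print(find_cycle('M-,relax-', phases.get))  # all four phases repeat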
The loss being qualitatively proportional to the energy is based on the fact that the energy lost by friction is proportional to the distance travelled, which in turn is proportional to the maximum displacement, which itself is qualitatively proportional to the energy of the system, as stated above. The lower bound for the energy of the system is ZERO, and an upper bound for energy is implicit in the possibility of the parts breaking. The result, via the alphaQ statement above, is a set of limits on the quantity space for E(loss). If we assume E(pump), the energy which is added to the system over a cycle, is within this boundary, then there will be a value for E(System), call it E(stable), such that:

for all t in intervals
  (implies (= (M E(System) t) (M E(stable) t))
           (= (M E(loss) t) (M E(pump) t)))

Note that E(stable) is unique because alphaQ is monotonic. If the energy of the system is at this point, the influences of friction and pumping will cancel and the system will stay at this energy. Suppose (> (M E(System) t) (M E(stable) t)) over some cycle. Then because the loss is qualitatively proportional to the energy, the energy loss will be greater than the energy gained by pumping, i.e., Ds(E(System)) = -1, and the energy will drop until it reaches E(stable). Similarly, if E(System) is less than E(stable) the influence of friction on the energy will be less than that of the pumping, thus Ds(E(System)) = 1. This will continue until the energy of the system is again equal to E(stable). Therefore for any particular pumping energy there will be a stable oscillation point. This is a qualitative version of the proof of the existence and stability of limit cycles in the solution of non-linear differential equations.

6. Conclusions

This paper has illustrated how motion can be represented using Qualitative Process theory. As the example indicates, the notions of quantity and process it provides allow useful deductions about systems involving motion to be made. The previous Qualitative State representation for motion can be viewed as a simplified process vocabulary where kinematic information has been included, and qualitative simulation rules can be viewed as a compilation of the limit analysis on this vocabulary. This suggests that for some purposes Qualitative States will be more useful, in that the result of limit analysis will only encode changes due to dynamics, not kinematics. It should be possible to smoothly merge the two representations, using the QP description to decide on the type of motion, the Qualitative State representation to determine the motions possible, and the QP description to provide more subtle analysis to choose between the alternatives of the qualitative simulation as well as examine other kinds of questions. This of course is a topic for future consideration.

7. References

de Kleer, Johan. "Qualitative and Quantitative Knowledge in Classical Mechanics" TR-352, MIT AI Lab, Cambridge, Massachusetts, 1975.

Forbus, K. "A Study of Qualitative and Geometric Knowledge in Reasoning about Motion" TR-615, MIT AI Lab, Cambridge, Massachusetts, February 1981.

Forbus, K. "Qualitative Reasoning about Physical Processes" Proceedings of IJCAI-7, 1981.

Forbus, K. "Qualitative Process Theory" MIT AI Lab Memo No. 664, February 1982.

Forbus, K. "Qualitative Reasoning about Space and Motion" to appear in Mental Models, D. Gentner and A. Stevens, editors.
Hayes, Patrick J. "The Naive Physics Manifesto" in Expert Systems in the Microelectronic Age, edited by D. Michie, Edinburgh University Press, May 1979.

Hayes, Patrick J. "Naive Physics 1 - Ontology for Liquids" Memo, Centre pour les etudes Semantiques et Cognitives, Geneva, 1979.

McCloskey, M. "Naive Theories of Motion" to appear in Mental Models, D. Gentner and A. Stevens, editors.
GETTING THE ENVISIONMENT RIGHT

Benjamin Kuipers
Tufts University
Medford, Massachusetts 02155

ABSTRACT

The central component of commonsense reasoning about causality is the envisionment: a description of the behavior of a physical system that is derived from its structural description by qualitative simulation. Two problems with creating the envisionment are the qualitative representation of quantity and the detection of previously-unsuspected points of qualitative change. The representation presented here has the expressive power of differential equations, and the qualitative envisionment strategy needed for commonsense knowledge. A detailed example shows how it is able to detect a previously unsuspected point at which the system is in stable equilibrium.

THE ENVISIONMENT

Causal reasoning --- the ability to reason about how things work --- is central to expert performance at problem-solving and explanation in many different areas. The concept of an envisionment, developed by de Kleer [1,2], has provided a framework for most subsequent research on causal reasoning. A physical system is described by a structural description consisting of the context-independent behavioral repertoires of its individual components, and their connections in this context. The envisionment describes the potential behaviors of the system, and is produced by qualitative simulation of the structural description. It can be used directly to solve problems or answer questions, or can be further analyzed to produce a functional description to explain why the system works the way it does.

However, there is significant disagreement on several key questions about the structure of the envisionment process: (1) How should continuously variable quantities be described for qualitative simulation? (2) How should the envisionment detect previously unsuspected points at which qualitatively significant changes take place?

De Kleer [1,2] does qualitative perturbation analysis by describing quantities in terms of the sign of the derivative (the IQ value) alone, but this is clearly too weak for other kinds of causal reasoning. Forbus [3] gets considerably greater power by representing each quantity in terms of the sign and magnitude of both its amount and its derivative. In practice, the power of his system depends only on the ordinal relations among quantities. Hayes [4] defines a modular quantity space in which inference about quantities takes place, but remains agnostic about its properties. None of the above systems use qualitative reasoning to discover previously unsuspected points where qualitatively significant changes take place, although de Kleer's "roller coaster" envisionment [1] is able to localize a change within a region before turning the problem over to a quantitative problem-solver.

In this paper, I present a simple but very general descriptive language for structural descriptions, and a qualitative simulation process for producing the envisionment. Within the causal structure description, a system is described as a collection of constraints holding among time-varying, real-valued quantities. A value is a description of the real number corresponding to a quantity at a given time-point. This description consists of the ordinal relations holding among the different values known to the envisionment, and the IQ value (the sign of the time derivative: +, 0, -) of the quantity at that time-point. A constraint consists of rules for propagating information describing the current value among the values of the related quantities. The mechanism is inspired by the scheme developed by Steele [6], modified to propagate ordinal and IQ value assertions rather than integers. The three types of constraints used in the example below are:

Arithmetic: (X = Y + Z) The values of the quantities must have the indicated relationship within any time-point.

Functional: (Y = M+(X)) Y is a strictly increasing (or decreasing (M-)) function of X. M+0 indicates that zero corresponds to zero.
Derivative: (Y = dX) At any time-point, Y is the rate of change of X.

The envisionment consists of a finite set of time-points representing the qualitatively distinct states of the system, and values for each quantity at each time-point. Thus, the set of values that are part of the envisionment, and the ordinal relations that hold among them, can only increase as new information is propagated across constraints from quantity to quantity: the qualitative simulation is monotonic. The qualitative simulation propagates information across the constraints to complete the description of the state of the system at the current time-point. After the propagation of information among values has settled down, the envisionment process examines the set of changing values in the current time-point to determine the next qualitatively distinct state. Determining the next state depends critically on the concept of distinguished value. Initially, zero is the only distinguished value, but if the IQ value of a quantity becomes zero, that value (a critical point) becomes a new distinguished value. Two of the rules (the system currently has seven) for creating a new time-point to represent the next qualitatively distinct state of the system are:

Move To Limit: If the current value of a changing quantity is not distinguished, and there is a distinguished value in the direction of change, let the value of that quantity in the next time-point be the next distinguished value.

Move From Point: If the current value of a changing quantity is distinguished, then let the next value be an undistinguished value in the given direction, closer to the starting point than any other distinguished value.

When the description of the system's current state is not sufficiently complete to determine the next state uniquely, the envisionment branches on the possible states of a particular IQ value or ordinal relation. If the qualitative simulation is unable to proceed, it may summarize the structural description to reduce its complexity at the cost of losing some information. The simulation terminates upon recognizing a contradiction, intractable branching, a cycle, or a quiescent system. When all the qualitatively significant points are specified in advance, Forbus [3] shows how an envisionment process can determine the possible behaviors of a system.

In order to demonstrate the power of this descriptive language and simulation process, the following example shows how, without external information, the simulation process deduces the existence of a previously-unsuspected distinguished value, and shows that a system moves to a stable equilibrium about that value. Consider a simple physical system consisting of a closed container of gas (at temperature T) that receives heat from a source (at Ts) and radiates heat into the air (Ta).

This research was supported in part by NIH Grant LM 03603 from the National Library of Medicine.
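A small sketch (not Kuipers' implementation) of how IQ values might propagate across functional and arithmetic constraints; the quantity names and the single-pass discipline are assumptions:

def qsum(a, b):
    # Qualitative sum of IQ values (+1, 0, -1, or None for unknown).
    if a is None or b is None:
        return None
    if a == 0 or b == 0:
        return a + b
    return a if a == b else None   # opposite signs: ambiguous

def propagate(iq):
    # Constraints assumed: X = Y + Z and U = M+(X).
    iq = dict(iq)
    iq['X'] = qsum(iq.get('Y'), iq.get('Z'))
    iq['U'] = iq['X']   # M+ preserves direction of change; M- would negate it
    return iq

print(propagate({'Y': 1, 'Z': 0}))   # X and U inherit IQ value +1
print(propagate({'Y': 1, 'Z': -1}))  # ambiguous: a case split is needed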
The problem is to deduce the existence of an equilibrium temperature (Te) between the temperatures of the heat source and the air, and to show that the system moves to a stable equilibrium about that temperature. Tables 1, 2, and 3 show the different stages of the qualitative simulation as it creates the envisionment.[1] Table 1 shows how the envisionment of the double-flow system branches in order to derive missing IQ values, how a new distinguished point is discovered on one of the branches, and how a set of corresponding values is discovered when several quantities take on distinguished values simultaneously. Table 2 shows how the structural description is summarized when the first envisionment bogs down at an intractable branch. Table 3 shows how the summarized structural description, and the newly-discovered correspondence, allow the successor time-points on the remaining two branches to be determined uniquely so the envisionment can be completed. Diagnosis of a stable equilibrium takes place using the final envisionment structure, by showing that a perturbation from the final quiescent state places the system into one of the previously described states from which there is a restoring change.

1. The envisionment diagrams (Tables 1 and 3) are read from top to bottom, each line following from those above. Each cell corresponds to a single time-point. Time progresses from top to bottom, and alternate branches are side by side.

Stepping back to consider the general problem of representing commonsense knowledge of causality in physical systems, it is useful to highlight certain points. (1) The structural description language has approximately the expressive power of differential equations, plus the ability to specify functional constraints as additional states of partial knowledge of a relationship. (2) The description of quantities in terms of ordinal assertions and IQ values provides a qualitative representation capable of high resolution where the problem demands it, and very low resolution elsewhere, without requiring a premature commitment about which values should be distinguished. (3) The accuracy and compactness of the envisionment depends on the set of distinguished values that indicate potential qualitative changes. The ability to create new distinguished values corresponding to critical points of the time-varying quantities is important to discovering previously unsuspected points of qualitative change, and avoids the need for premature commitments. (4) Time is represented explicitly by the structure of the set of time-points, rather than implicitly in the dynamic behavior of the simulator, so the value of each quantity at each time-point is tied into the network of ordinal assertions. (5) Each inference is irrevocable, so the state of knowledge becomes monotonically better specified as the simulation runs.

At the time this is written, the propagation, envisionment, and summarization components have been completely implemented, but the perturbation analysis of the stable equilibrium is done by hand. This paper is a summary of [5], which provides a complete specification for the representation and qualitative simulation.

ACKNOWLEDGEMENTS AND REFERENCES

Christopher Eliot is responsible for the implementation of this system, and has made substantial contributions to its design. Ken Church, Ken Forbus, Ramesh Patil, and Peter Szolovits have also provided helpful comments.
[1] J. de Kleer. 1977. Multiple representations of knowledge in a mechanics problem-solver. Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Cambridge, Mass.
[2] J. de Kleer. 1979. The origin and resolution of ambiguities in causal arguments. Proceedings of the Sixth International Joint Conference on Artificial Intelligence. Tokyo, Japan.
[3] K. D. Forbus. 1981. Qualitative reasoning about physical processes. Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, B.C.
[4] P. J. Hayes. 1978. Naive physics I: Ontology for liquids. Department of Computer Science, University of Essex, unpublished manuscript.
[5] B. Kuipers. 1982. Representing the Structure of Causal Relationships. Medford, MA: Tufts University Working Papers in Cognitive Science, No. 18.
[6] C. L. Steele, Jr. 1980. The definition and implementation of a computer programming language based on constraints. MIT Artificial Intelligence Laboratory TR-595.

Table 1. Double heat-flow.

* The structural description of the heat-flow system is shown in Table 2(a).
* In time-point (1), starting with the condition that Ta < T < Ts, ordinal assertions propagate through the network, but fail to provide information about net flow.
* In order to allow the derivative constraint to derive IQ values, the envisionment is split into cases according to the sign of net flow. In the branches, with net flow specified, IQ values propagate through the network to complete the description.
* Time-point (1E) is quiescent, with all IQ values steady, so new distinguished values are created, and the correspondence between quantities taking on distinguished values is recorded:
  (net flow: 0) <=> (inflow: flow*) <=> (outflow: flow*) <=> (dTa: dTa*) <=> (dTs: dTs*) <=> (T: Te)
* Time-points (1G) and (1L) each contain six changing values. However, not enough is known to show that they arrive at their limits simultaneously, making the required case split intractably large, so the envisionment halts.

Table 1. Envisioning the double heat-flow system.

constant(Ta), constant(Ts)

(1)  Ta < T < Ts;  dTa > 0;  dTs > 0;  outflow > 0;  inflow > 0;  net flow = unknown

Case split: relation(net flow, 0)

(1G) net flow > 0      | (1L) net flow < 0      | (1E) net flow = 0
inflow > outflow > 0   | 0 < inflow < outflow   | inflow = outflow > 0
Ta < T < Ts            | Ta < T < Ts            | Ta < T < Ts
dTa, dTs > 0           | dTa, dTs > 0           | dTa, dTs > 0
increasing(T)          | decreasing(T)          | steady(T)
increasing(dTa)        | decreasing(dTa)        | steady(dTa)
increasing(outflow)    | decreasing(outflow)    | steady(outflow)
decreasing(dTs)        | increasing(dTs)        | steady(dTs)
decreasing(inflow)     | increasing(inflow)     | steady(inflow)
decreasing(net flow)   | increasing(net flow)   | steady(net flow)

Table 2(a,b): The arithmetic and functional parts of the causal structure description are simplified in three steps, applying the following simplification rules.

(a-b) x+y=z & constant(y) => z = M+(x)
(a-b) x+y=z & constant(z) => y = M-(x)
(b-c) y = M+(M+(x)) => y = M+(x)
(b-c) y = M-(M+(x)) => y = M-(x)
(c-d) y = M-(x) - M+(x) => y = M-(x)

Table 2(c,d): The final step in summarizing the heat-flow description. The resulting structural description is less informative than the original, but equally valid.
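The (b-c) composition rules hold because each M+/M- constraint records only the sign of a monotonic dependence, and signs multiply under composition. A sketch:

def compose(*links):
    # links: '+' for M+, '-' for M-; result is the sign of the
    # composite strictly monotonic function.
    sign = 1
    for link in links:
        sign *= 1 if link == '+' else -1
    return '+' if sign == 1 else '-'

assert compose('+', '+') == '+'   # y = M+(M+(x))  =>  y = M+(x)
assert compose('-', '+') == '-'   # y = M-(M+(x))  =>  y = M-(x)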
Table 3. Summarized heat-flow.

* The summarized structural description is shown in Table 2(d).
* In time-point (1), ordinal assertions propagate as before, and the need for IQ values prompts a case split.
* Time-point (1E) is quiescent as before.
* The previously-determined correspondence makes it possible to infer the relation between T and Te in time-points (1G) and (1L).
* Since time-points (1G) and (1L) each contain only two changing quantities and their limits are known to correspond, their subsequent states, (2G) and (2L), are easily and unambiguously determined by the Move To Limit rule.
* Since the three branches of the split have identical end states, they are joined to create state (2).

Table 3. Envisionment of the summarized double heat-flow description.

(T: Te) <=> (net flow: 0)

(1)  Ta < Te < Ts;  Ta < T < Ts;  net flow = unknown

Case split: relation(net flow, 0)

(1G) net flow > 0     | (1L) net flow < 0     | (1E) net flow = 0
Ta < T < Te           | Te < T < Ts           | T = Te
increasing(T)         | decreasing(T)         | steady(T)
decreasing(net flow)  | increasing(net flow)  | steady(net flow)

(2G) T = Te           | (2L) T = Te
net flow = 0          | net flow = 0
steady(T)             | steady(T)
steady(net flow)      | steady(net flow)

Case join: identical outcomes on all branches

(2)  net flow = 0;  T = Te;  steady(T);  steady(net flow)
The QBKG System: Generating Explanations from a Non-Discrete Knowledge Representation[1]

Hans J. Berliner
David H. Ackley
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

1. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551.

Abstract

The QBKG system produces critical analyses of possible moves for a wide variety of backgammon positions, using a hierarchically structured, non-discrete form of knowledge representation. This report compares discrete and continuous representations and reasoning systems, addressing issues of competence, robustness, and explainability. The QBKG system is described and demonstrated.

Discrete systems vs Continuous systems

Most work in knowledge representation for artificial intelligence systems has used some variety of "discrete" representation and control structure, from the condition-action rules of production systems [9], to a variety of frame-based systems [11, 4], to various sorts of semantic network [10, 7]. These systems have in common the property that at any given time there is an unequivocal distinction between what knowledge is relevant and what is not (with relevance criteria such as "those productions whose condition portion is satisfied", "those scripts that are activated", or "those nodes with marker 2 set.") This all-or-none assumption increases the efficiency of these systems by reducing the effective size of the knowledge base, and makes construction of the knowledge base simpler by guaranteeing modularity. The price exacted for this simplicity can be high, however, in terms of system behavior. As discussed below, such systems tend towards anomalous behavior in certain circumstances, and are typically very sensitive to noise. Some more recent work with these representations has centered on relaxing the all-or-none assumption in various ways, such as allowing for partial matches in the condition-part of production rules [2] and the various spreading-activation theories in semantic nets [5].

Another reason that discrete systems seem natural stems from the fact that all systems must eventually make basically all-or-none decisions about their actions. The traditional view seems to assume that the discreteness of the ultimate action implies that it will have discrete justifications, with discrete reasons for the justifications and so on, until the discrete inputs are reached. A major alternative scheme was advanced by workers interested in game-playing systems, in the construction of knowledge-intensive evaluation functions for games such as backgammon, where the element of chance introduces a branching factor that makes substantial exhaustive search infeasible. In this paradigm, knowledge is represented by appropriate mathematical combination of observations on the world, and the control mechanism is the evaluation of the overall function on a selection of legal successor states, interpreted as a "gathering of evidence" procedure. Discretization is held off until the last possible moment, when the evaluations of the successor states are compared and the one with the largest evaluation is chosen. In this scheme, the relevance criterion for a knowledge item is fuzzy, with potentially all of the knowledge base implicated to some degree in each evaluation. For a large enough knowledge base, this might suggest that implementation on a uniprocessor would be slow; however, the structure is well suited to a parallel implementation.
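A sketch of the control structure just described (evaluate and legal_successors are hypothetical stand-ins): all knowledge contributes a number to each successor state, and the only discrete act is the final comparison.

def choose_move(position, legal_successors, evaluate):
    # Gather evidence: score every legal successor state, then
    # discretize at the last possible moment with an argmax.
    return max(legal_successors(position), key=evaluate)

# e.g. with trivial stand-ins:
best = choose_move(0, lambda p: [p + 1, p + 2, p + 3],
                   evaluate=lambda s: -abs(s - 2))
assert best == 2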
It should be noted that even the choice of "discrete" versus "continuous" representations is not a discrete choice: in fact, there are varying degrees of continuity possible, from the two-valued propositional representation (e.g., "John is an adult" vs "John is not an adult"), to a finer grain representation ("John is in the 18 to 34 group"), to an essentially continuous representation ("John is 26.487 years old"). In many cases, the two-valued approach seems completely adequate ("John is a male"), and it seems easy to ignore the odd boundary cases that occasionally crop up. Unfortunately, in many areas of practical interest there is a wide "gray area" between the extremes (e.g., for age, duration, size, shape, color, beliefs, desires), and two-state (or n-state, for small[2] n) systems tend to manifest undesirable behavior near the boundaries between the states. In society, for example, discretization leads to surprising behaviors such as pretending to be "over 18", or driving from Massachusetts to New Hampshire to buy liquor. Berliner has shown [3] that Samuel's use of non-linearity in an attempt to improve his checkers program probably foundered on this problem. The important point is that too large a grain size for an observation can have a disastrous effect on system behavior. The extra information in a fine-grained observation can always be discarded higher in the knowledge structure if it is not needed, such as when the digital watch tells us it is "4:56:34" and we think "five o'clock." In other circumstances, such as timing an egg, we would maintain a finer grain. It is clear that the needed grain size varies depending on the task at hand. In some circumstances, the simple two-state assumption is adequate, but in general, no a priori grain size assumption can be made.

2. "Small" here implying that large numbers of usefully distinguishable observations are being "bucketed" together.

For this reason, discrete systems tend to be fragile in the face of noisy or erroneous inputs. Simply put, in a two-valued system, if you are wrong, you are very wrong. In a discrete medical diagnosis system using production rules, for example, an erroneous result on a test could prevent the system from ever making an accurate diagnosis, because the knowledge relating to the actual disease is not used, due to the non-satisfaction of the condition portions of the relevant productions. This could lead to an inaccurate diagnosis (or no diagnosis at all) despite a preponderance of evidence, excepting one test, pointing to the actual malady.

Shifting to a more continuous representation can alleviate both of these problems. The boundary problem is handled by removing the hard boundaries and replacing them with non-linear functions which provide context-sensitivity (as in the way that scores on IQ tests are divided by the subject's age, as opposed to, say, interpreting the raw scores on different scales depending on whether the subject is over or under 10 years old). The fragility problem is handled in two ways: on the one hand, input error or uncertainty is not magnified by the "bucketing" process, and on the other hand, the "gathering of evidence" control structure ensures that the most reasonable hypothesis based on all available data will not be missed due to a small miscue.

One strong advantage of some discrete systems is that they are very well suited to the task of explaining what they are doing, a task at which humans are frequently quite adept. Given a subgoal structure, it is very simple to explain why a particular fact is needed (to prove the next higher goal in the structure) and how a particular fact is to be established (by proving all needed subgoals immediately below.) [12, 6] Given only a continuous evaluation function, it is not immediately clear how to explain why one evaluation is better than another or what the significance of a particular observation is in the overall scheme of evaluation.

The QBKG system is an example of a continuous knowledge representation system that plays backgammon and provides a mechanism for explaining some of what it does. It is derived from the BKG system, which demonstrated expert-level abilities in human competition and introduced SNAC [3], which forms the basis of the method used for structuring knowledge in QBKG. This paper presents the high-level issues addressed by the system and describes the fundamental mechanisms used. For a more extensive treatment of the system, discussion of the limitations of the method, and possible extensions to a learning system, see [1].

The QBKG system

The explanation mechanism of QBKG must handle two main issues. First, it must isolate the backgammon knowledge relevant to any particular query from the (usually large amount of) knowledge that does not bear on a given situation. The second issue is to provide some mechanism for deciding when quantitative changes should be viewed as qualitative change; in essence, to provide the judgemental ability that discrete systems enjoy by virtue of their all-or-none assumption. (There was relatively little effort expended in the generation of natural language output; in the example below, the output has been left "in the rough" as the system generated it. Most "language issues" have been ignored.)

QBKG is oriented around answering the question "Why did you make that move, as opposed to this move?" This reduces the explanation task to one of accounting for the differences between a pair of moves. The evaluation function is structured in a hierarchical fashion as shown in Figure 1. At the leaves of the tree are primitive observations (Prim in Figure 1) constituting the system's source of knowledge about the world. The primitives are combined into concepts using a variety of mathematical operators provided by the system.
Related concepts are collected higher in the tree by scaling each concept non-linearly by multiplying by an application coefficient (AC) [3] and summing the results to produce a new concept. This can be thought of as a unit conversion operation: the subconcept is converted into units of the concept, with the application coefficient giving the current conversion rate. In the case of IQ test scores, for example, units of raw score are converted into units of IQ by multiplying by 1/min(age, cutoff-value).[4] In QBKG, ultimately all values are converted into units of heuristic value, and a single value, Heur, is available at the top of the tree.
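A minimal sketch of this scaled summation (node names follow Figure 1; the numbers and the fixed coefficients are invented for illustration - in QBKG the ACs are non-linear functions of the position):

def node_value(children):
    # children: (subconcept value, application coefficient) pairs;
    # each AC converts its child's units into the parent's units.
    return sum(value * ac for value, ac in children)

blocking, positional = 4.0, 2.0          # invented subconcept values
heur = node_value([(blocking, 0.5), (positional, 1.5)])
print(heur)   # 5.0, in units of heuristic value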
The fundamental assumption of the explanation process is that important differences between a pair of moves will be reflected by "large" changes in the values of the highest-level concepts that are related to the differences. Letting Move1 denote the move with the larger Heur and Move2 the one with the smaller, define delta-concept = value of concept for Move1 - value of concept for Move2. Referring to Figure 1, this assumption implies that if delta-Blocking is "small", then backgammon knowledge related to blocking is not relevant to this comparison and should not be mentioned. If only one subconcept of Heur, say, Tactical, is not small, then all interesting differences are with respect to Tactical concepts, and the level of discourse for comparison can be narrowed to just tactical knowledge. This is the method by which the relevant backgammon knowledge is isolated. Beginning at Heur, the system searches down the tree until a level is reached at which more than one significant difference is found. As desired, if the two moves are radically different in their effects, the commentary will begin at a relatively abstract level (e.g., tactical and positional issues) and if the two moves are quite similar, the commentary will focus on the crucial differences at whatever level they are found.

[Figure 1: Schematic view of portions of the QBKG knowledge base - a tree with Heur (a collection, Coll) at the root, collections such as Blocking and Positional below it, application coefficients such as TactWgt, PosWgt, and PrimeWgt on the arcs, and primitives (Prim) such as MyPipcount and YourPipcount, combining into PipDifference, at the leaves.]

If the discussion is at a fairly narrow level, it is usually sufficient to just describe the differences and their magnitudes; at higher levels, this leads to unsatisfying, "hand waving" commentaries. The level at which an explanation feels satisfying varies from person to person and topic to topic,[5] so we have adopted a simple heuristic. The broad concepts at the top of the tree are denoted collections (Coll in Figure 1), and the system is built to automatically "look inside" of any collections that are mentioned; in effect, supplying for free the question "Why is there a difference in that collection?"

The above discussion is predicated upon having the ability to recognize "large" or "significant" differences in the values of concepts. In a two-valued system, any difference is a large one (on the order of True versus False), and the process of recognizing significant differences is done outside of the system, during the generation of discrete primitive observations of a continuous world. Unfortunately, it is impossible to determine what should be considered significant without considering the context within which the judgement is to be made. A difference of ten feet, for example, is much larger in the context of "Distance I am from the ground" than in the context of "Distance I am from the moon." A context provides a means of classifying differences into fuzzy classes such as "about the same", "somewhat larger", and so on. The number of classes will vary depending on personal taste as well as the degree of refinement of the knowledge base. Our assumption of six classes ("not significantly", "slightly", "somewhat", "much", "very much", and "vastly") seems to work quite well.

In terms of the QBKG structure, the context of a concept is the set of more general concepts of which it is a part. Some of the more primitive concepts, such as MyPipcount, appear in several places in the knowledge structure and can therefore be judged in several different contexts. To judge a difference in context, it is necessary to determine how that difference affects the value at the top of the knowledge structure.

5. As evidenced by the child who responds to every explanation with "Why?"
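A sketch of how such context-relative judgements might be computed, anticipating the example in the next paragraph: the top-level goodness metric is rescaled through the application coefficients on the path down to the concept, and the difference is bucketed. The equal-width class boundaries below are an assumption, not QBKG's actual ones.

CLASSES = ["not significantly", "slightly", "somewhat",
           "much", "very much", "vastly"]

def judge(delta, top_metric, path_acs):
    metric = top_metric
    for ac in path_acs:       # e.g. dividing by TactWgt = 3 gives 10/3
        metric /= ac
    bucket = min(int(abs(delta) / metric), len(CLASSES) - 1)
    return CLASSES[bucket]

print(judge(1.0, 10.0, [3]))   # below 10/3: "not significantly"
print(judge(5.0, 10.0, [3]))   # between 10/3 and 20/3: "slightly" under these boundaries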
Given the sign and magnitude of a difference which is to be considered a significant improvement for Heur, this goodness metric can be propagated down through the tree to determine how much better or worse one move is than another with respect to a given concept in a given position. In Figure 1, for example, if the goodness metric for Heur is assumed to be +10, and in a given position TactWgt equaled 3, then the goodness metric for EdgePrime would be +10/3, and a delta-EdgePrime of less than 10/3 would be judged "about the same", a delta-EdgePrime between 10/3 and 20/3 would be "somewhat better", and so on. This procedure requires an a priori goodness metric for Heur. In QBKG, probably the most satisfying overall context would be "What is the expected value of the game?" with a goodness metric of perhaps a tenth of a point. Such an evaluation function could be built, and in fact the system has an independent computation used to approximate the expected value, which is used in making doubling decisions.[6] The original BKG evaluation function only needed to order the possible moves with respect to a given initial position, and this "relative" nature remained through the translation to the QBKG-style evaluation function, so Heur values resulting from different initial positions are not directly comparable. With respect to a given position, however, various heuristics have been devised which empirically give satisfactory results in the determination of significant differences.

6. The approximation of the expected value is too crude to provide adequate discrimination between individual moves, however, thus requiring the "two function" scheme used in BKG.

The QBKG system is now able to produce cogent commentary on about 70% of positions randomly presented to it. Its principal flaws at this point are idiosyncrasies in the knowledge base due largely to historical reasons. For example, the system is unable to comment on the relative merits of two moves where one move separates the opposing armies (creating a non-interfering race to the finish of the game) and the other move does not. An example of its ability is shown in Figure 2, taken from a set of problems by Holland [8]. QBKG has chosen 17-24 as its move, and the user has asked for a comparison with 12-18,17-18. Holland comments on this position, "The correct play is to move one man from [the 17 to the 24 point], hitting Black's blot. You must try to prevent Black from establishing [the 24 point]. If you were to make [the 18 point] in lieu of hitting, Black would have 11 chances out of 36 to roll a 1, giving him a position from which he will still be able to win the game." (p. 66, paraphrased into QBKG's notation.)

Figure 3 shows QBKG's commentary on this choice. Part (1) is some general comments about the situation, based on PipDifference and the independent expected value computation. Part (2) is QBKG's opinion on the worth of the two moves, based on delta-Heur and some knowledge about the range of possible Heurs in this position. Part (3) is the result of the focusing mechanism discussed above and shows the extreme importance of hitting the lone Black man. The crucial issue of stopping Black from
making an advanced point (AdvPoint in Figure 1) is discovered and reported, while irrelevant differences between the moves, such as the added risk that White may be hit (11 chances for Move 1 vs 1 for Move 2), are ignored. The bracketed numbers in part (3) are reference numbers by which the user may request further commentary on the specified topics. The system responds to such requests by recursively entering the focusing system using the selected topic as the root of the search, in the same manner as it handles topics which are denoted collections.

[Figure 2: Sample position, White to play 6,1 - a backgammon board with Black's points numbered 1-12 and White's points numbered 13-24.]

Figure 3: QBKG's commentary on two moves in Figure 2.

(1) In the given position, White is far ahead in the race, and has a winning advantage, with substantial gammon chances.
(2) The actual move, 17-23,23-24 (Move 1), is much better than the suggested move, 12-18,17-18 (Move 2).
(3) There is nothing to recommend Move 2. The advantages of Move 1 are:
  o vastly better chances of keeping Black from making an advanced point [1].
  o very much better attack by White [2].

Conclusions

We view discretization as a simplifying assumption that becomes less and less workable as AI systems begin to tackle real-world tasks, with the accompanying problems of noise, uncertainty, and shifting notions of what is true and what is relevant. Using a fine-grained representation, hierarchical knowledge structuring, and the context-sensitivity provided by application coefficients, knowledge-intensive evaluation functions of the form used in QBKG provide a means of avoiding the difficulties introduced by an excessively discrete view of the world, while still providing the benefits of explainability and uniformity of representation, which are demonstrated advantages for an artificial intelligence system.

References

1. Ackley, D. H. & Berliner, H. J. The QBKG System. Carnegie-Mellon University, 1982. In preparation.
2. Anderson, J. R. A.C.T. Harvard University Press, 1983. (forthcoming)
3. Berliner, H. J. "Backgammon Program beats World Champion." Artificial Intelligence 14 (1980).
4. Bobrow, D. G. & Winograd, T. "An Overview of KRL." Cognitive Science 1, 1 (1977).
5. Collins, A. M. & Loftus, E. F. "A spreading activation theory of semantic processing." Psychological Review 82 (1975).
6. Davis, R., Buchanan, B., Shortliffe, E. "Production Rules as a Representation for a Knowledge-Based Consultation Program." Artificial Intelligence 8 (1977).
7. Fahlman, S. E. NETL: A System for Representing and Using Real-World Knowledge. MIT Press, 1979.
8. Holland, T. Better Backgammon. Reiss Games, Inc., New York, 1974.
9. Newell, A. & Simon, H. A. Human Problem Solving. Prentice-Hall, Englewood Cliffs, N.J., 1972.
10. Quillian, M. R. Semantic memory. In Semantic Information Processing, Minsky, M., Ed., MIT Press, 1968.
11. Schank, R. C. & Abelson, R. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, N.J., 1977.
12. Winograd, T. Understanding Natural Language. Academic Press, 1972.
PLANT/DS REVISITED: NON-HOMOGENEOUS EVALUATION SCHEMA IN EXPERT SYSTEMS

Carl T. Uhrik
Department of Computer Science
University of Illinois at Urbana-Champaign

ABSTRACT

This paper describes several deficiencies of PLANT/ds, an expert system for the diagnosis of soybean diseases. This production system employs both human and machine derived rules. The rule application mechanisms for both rule groups suffer from improper treatment of incomplete data. The problem is illustrated by several examples, and a solution is proposed. Implementation of a new version of PLANT on the empty expert system ADVISE is seen as the crux of future research.

INTRODUCTION

Within the expert system PLANT/ds [2] for diagnosis of soybean diseases in Illinois, two rule groups operate simultaneously. One rule set is derived through the painstaking effort of plant pathologists to encode their knowledge in the form of machine executable rules. The other rule set is the product of the automated induction program AQ11 run on exemplary cases of the diseases [3]. PLANT has enjoyed relatively high popularity among the agronomists exposed to the system, yet the current rules suffer from a number of maladies.

From a pragmatic basis, two problems are of concern:

1) A rule may fail to recognize a true instance of a disease (false negative), or
2) A rule all-too-freely declares that some disease other than the correct one is present (false positive).

Only the latter of these problems is noticeably present. A typical situation is: "System reports most probable occurrence of disease is A with evidence that B, C, D, and E are present, B being most likely alternative and E being least likely. However, the expert making the inquiry states that disease C is the correct diagnosis and, moreover, that B is completely erroneous." The "confusion" matrix, Fig. 1, summarizing some 340 test cases, shows this problem rather dramatically. The fact that the diagonal entries of the matrix are almost solidly 100% for both human and machine rules attests to the skill of the knowledge engineer and the prowess of AQ11. Conversely, relatively high numbers in the off-diagonal entries of the table evince the severity of problem (2).
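For reference, a sketch of how a confusion matrix like Fig. 1 can be tallied (field names hypothetical): each row is a true diagnosis, each column gives the percentage of that row's test cases on which a disease's rule fired.

from collections import defaultdict

def confusion_matrix(cases, rules):
    # cases: (true_disease, observations) pairs;
    # rules: dict disease -> predicate over observations.
    fired = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for truth, obs in cases:
        totals[truth] += 1
        for disease, rule in rules.items():
            if rule(obs):
                fired[truth][disease] += 1
    return {t: {d: 100.0 * fired[t][d] / totals[t] for d in rules}
            for t in totals}

demo = confusion_matrix([('D1', {'spots': True})],
                        {'D1': lambda o: o['spots'], 'D2': lambda o: True})
print(demo)   # the off-diagonal 100% for D2 is a false positive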
This syndrome, loosely labelled as "confusion," can be further conceptualized as:

a) assigning diagnoses to diseases on the basis of evidence which is too weak, or
b) assigning diagnoses to too many diseases at once.

[Fig. 1 - Confusion Matrix for Two Rule Sets: two panels, "Confusion Matrix Summarizing the Diagnosis of 340 Testing Events Using Inductively Derived Rules" and "Confusion Matrix Summarizing the Diagnosis of 340 Testing Events Using Expert-Derived Rules". Rows are the correct diagnoses D1-D15 (Diaporthe stem canker, Charcoal rot, Rhizoctonia root rot, Phytophthora root rot, Brown stem rot, Powdery mildew, Downy mildew, Septoria brown spot, Bacterial blight, Bacterial pustule, Purple seed stain, Anthracnose, Phyllosticta leaf spot, Alternaria leaf spot, Frog eye leaf spot); columns are the assigned decisions, with entries the percentage of test cases assigned each decision.]

This work is supported in part by U.S. Dept. of Agriculture Grant No. 321512344 and in part by the Office of Naval Research Grant No. N00014-82-K-0186.

Although the two cases overlap to a large degree, there is some distinction. Case a) amounts to "jumping the gun" due to excessive sensitivity in evaluation; case b) involves a lack of proper discrimination/filtering between the diseases, either by the intrinsic nature of the disease or lack of proper granularity in the rules. Assuming a solution, in the first case we simply adjust the evaluation parameters, but case b) requires revision of the descriptors and/or rules.

PARTIAL SATISFACTION OF RULES

A rule is said to be satisfied when some of the user input data matches all of the selectors in the rule. The basic unit of a rule is a selector, which tests whether a selector's variable takes a value from a reference set defined by the selector. Further, a weight or confidence can be assigned as the value of the selector, contingent upon the value of the selector variable. The concept of a selector is fairly intuitive, but for a complete discussion refer to [4]. Selectors combine via an AND function to produce terms, which further combine under an OR function to produce the left-hand sides of rules. Thus, a rule

[x2=3][x3=1,3] v [x4<4] ::> [decision=A]

can be interpreted: Decision A is taken if x2 is 3 and x3 is 1 or 3, or if x4 is smaller than 4.

In a case of perfect knowledge of symptom descriptors, there is little doubt that the rules are more than adequate to discriminate various cases of the diseases. However, sparse information on the state of diseased plants requires efficient use of the little data available. Hence, an evaluation scheme is defined to handle the numerous cases of partial satisfaction of a group of selectors in a rule.
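A sketch of this rule syntax in executable form (representation assumed): selectors test membership in a reference set, terms AND selectors, and the rule ORs its terms.

def selector(var, values):
    return lambda obs: obs.get(var) in values

rule_A = [                                   # [x2=3][x3=1,3] v [x4<4]
    [selector('x2', {3}), selector('x3', {1, 3})],
    [lambda obs: obs.get('x4', float('inf')) < 4],
]

def satisfied(rule, obs):
    # Full (non-partial) satisfaction: some term has all selectors true.
    return any(all(sel(obs) for sel in term) for term in rule)

assert satisfied(rule_A, {'x2': 3, 'x3': 1})
assert satisfied(rule_A, {'x4': 2})
assert not satisfied(rule_A, {'x2': 5})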
Such a scheme includes the semantics of the AND and OR functions (e.g., MIN/MAX) and the subsequent assignment of confidences to the right-hand sides of rules. Fig. 2 describes the particular differences in the structure and evaluation schemes of the PLANT expert rules and induction rules.

Fig. 2 - Structure & Evaluation of 2 Rule Sets

Structure:
  Selector S: [xi # R : w]  (# denotes a relational operator, R is a subset of xi's value set, w is an optional weight, found only in EXPERT RULES)
  Term T: S1 S2 S3 ...
  Disjunctive Term DT: T1 v T2 v T3 ...
  Linear Expression: qs*DTs + qc*DTc  (qs, qc are optional weights, found only in EXPERT RULES)

Evaluation:
  1) A selector is assigned a confidence value of + if satisfied and - if unsatisfied or unknown.
  2) Confidence of a term is the AND of its selector confidence values.
  3) Confidence of a disjunctive term is the OR of its constituent term confidence values.
  4) Linear expressions arithmetically combine confidences of disjunctive terms.

For expert rules: + = 1, - = 0, AND = mean, OR = MAX, qs = 0.8, qc = 0.2.
For induction rules: + = 1, - = -1, AND = mean, OR(x,y) = x + y - xy.

The greatest confusion occurs either among the various leaf spot diseases or between the 3 leaf spot diseases and another disease. Since the leaf spot diseases have the longest rules (number of terms), one might expect a high potential for confusion in the case of partial satisfaction. Consider the following rules for downy mildew and brown spot:

[leafspots halos=p][seed mold growth=p][stem=n][leaf mildew growth=on lower leaf surface]
  ::> [diagnosis=downy mildew]

and

[precipitation>n][# yrs crop repeated>1][damaged area=not whole field][roots=n][leaves=abn][leafspots halos=no yellow halos][leafspots water soaked margin=abs][leafspot size>1/8"][leaf malformation=abs]
v [precipitation>n][leafspot size>1/8"][leaves=abn][leafspots halos=no yellow halos][leafspots water soaked margin=abs][roots=n]
v [time=apr..jun][damaged area=not whole field][leaves=abn][leafspot halos=no yellow halos][leafspots water soaked margin=abs][roots=n]
  ::> [diagnosis=brown spot].

Both of these rules adequately cover the standard textbook cases of the two diseases. However, consider the situation where leafspot halos exist with no yellow halos, roots are normal, leaf mildew growth is observed, water soaked margins are absent, and leaf spot size is greater than 1/8". Using this data in the rules causes 3 of 4 selectors to be satisfied in the first rule. Let us assume the evaluation scheme for the INDUCTION RULES of PLANT (Fig. 2), giving .5 confidence. Since many of the same selectors occur in the second rule, its confidence level must also be evaluated. The first term satisfies [leaves=abn][leafspots halos=no yellow halos][roots=n][leafspots water soaked margin=abs] for a confidence of .33. The second term satisfies [leaves=abn][leafspots halos=no yellow halos][roots=n][leafspot size>1/8"] for a confidence of .33. The third satisfies this again for .33, which brings the cumulative confidence to .95. Thus, there is good evidence for downy mildew, but brown spot is indicated as the most probable choice on the basis of this inflated confidence.

Granted, if we select the threshold low enough, we can empirically guarantee for large test sets that the correct diagnosis of the disease will always be indicated, but we are only increasing the confusion between similar diseases when we do this.
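The two schemes of Fig. 2 can be rendered directly; the sketch below reproduces the .5 figure for the downy mildew rule and shows how the probabilistic sum compounds several weak terms (it does not attempt to reproduce the paper's exact .95 figure).

def term_conf(sel_confs):
    return sum(sel_confs) / len(sel_confs)           # AND = mean

def expert_conf(terms):
    return max(term_conf(t) for t in terms)          # OR = MAX

def induction_conf(terms):
    conf = 0.0
    for t in terms:
        c = term_conf(t)
        conf = conf + c - conf * c                   # OR(x,y) = x + y - xy
    return conf

# Induction scheme (+ = 1, - = -1): 3 of 4 selectors satisfied:
assert term_conf([1, 1, 1, -1]) == 0.5
# Three terms each worth .33 already compound past .70:
weak = [1, 1, 1, 1, -1, -1]                          # mean = 1/3
print(round(induction_conf([weak, weak, weak]), 2))  # 0.7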
The problem of soybean diagnosis may only be trivially solved for the easiest cases (unless the cases the machine readily discriminates are different than those previously noted by soy pathologists). This leads to a more general question of how commonalities between rules should be treated in the evaluation scheme. Groups of selectors can occur in clusters in more than one term in the same rule. The simultaneous satisfaction of such a group, as [roots=n][leaves=abn][leafspots halos=no yellow halos] in the above example, allows this one condition to artificially inflate the significance of the rule by raising the confidence value out of proportion. It would seem that these terms should be factored out into a separate rule. Often, this indicates a significant generalization on its own and can be assigned a meaningful name, as with the term common to Brown Spot, Frog Eye Leaf Spot, Alternaria Leaf Spot, and Phyllosticta Leaf Spot:

[leaves=abn][leafspot halos=no yellow halos][leafspots water soaked margin=abs][leafspot size>1/8"] ::> leafspot disease,

where leafspot disease can serve within a selector with the proper weight in the contexts of the previous occurrences of the cluster. Several of these factorings of terms have been noted in Fig. 3 for both expert and induction rules.

Another problem exists in the homogeneity of the evaluation scheme. The leafspot diseases are simply too similar to assume that the same scheme for distinguishing the most diverse diseases will also distinguish the fine shades of the leaf diseases. There is a need for different evaluation schemes to evaluate different rules (perhaps topology dependent, e.g., f(# of terms)). The average function may serve well as the default, since it has been empirically proven the best homogeneous scheme, but the minimum function, which yields a stricter separation of confidences, suits the leafspot disease rules better.

The switching between various levels of rules might be effected through the use of separate rule groups in the ADVISE system. The intersection (AND) of the exclusion sets of the selectors in a rule group may serve to define the left-hand side of a rule for switching rule groups. As soon as the confidence of that rule surpasses that of the current active rule group, the new rule group would be activated (and optionally deactivate the other). See Fig. 4 for an illustration of this rule group switching.

The probabilistic sum used in the induction evaluation is certainly justified in rules where there are a small number of distinct terms, but the leafspot diseases indicate the flaw with trying to apply the technique too loosely. If there is only marginal evidence in a great many terms, we have suddenly inflated the significance of these cumulatively (in fact they probably have no significance when considered en masse). Since there is clearly a variant need, dependent on the rule, as to the choice of evaluation scheme, the scheme should be flexible in this respect.

Fig. 3 - Partial Term Factors Occurring in Multiple Terms/Rules

Induction Rules:
[stem=abn][leaf malformation=abs] ::> stem disease
[stem=abn][leaf=abn] ::> stem & leaf disease
[leaves=abn][leafspot=pres][leafspot size>1/8"][leafspot water soaked margin=abs] ::> leafcond1
[roots=norm][prec>=norm][leafcond1] ::> watercondition
[leafspot water soaked margin=pres][leafspot halos=no yellow halos] ::> leafcond2
[leaves=abn][leafspot size>1/8"] ::> leafcond3
[leafcondition][stem=n] ::> leafcond4

Expert Rules:
[leaves=abn][stem=abn] ::> extensive damage
[plant stand<normal][extensive damage][roots=rotted][stem cankers=present][canker lesion color=brown,black] ::> root rot
[leaves=abn][leafspot halos=no yellow halos][prec>=norm] ::> halo syndrome
[leaves=abn][leafspot halos=no yellow halos][leafspot water soaked margin=abs][leafspot size>1/8"] ::> leafspot disease
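A sketch of the proposed factoring (names follow Fig. 3; the weighting is an assumption): the shared cluster is scored once as a named concept, and each term then sees it as a single weighted pseudo-selector instead of several independent chances to inflate the score.

def mean(xs):
    return sum(xs) / len(xs)

def term_with_cluster(cluster_score, other_sel_confs, weight=1.0):
    # The factored cluster enters the term once, with its own weight.
    return mean([weight * cluster_score] + list(other_sel_confs))

leafspot_disease = mean([1, 1, 1, 1])   # all four cluster selectors satisfied
print(term_with_cluster(leafspot_disease, [1, -1]))   # 0.33...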
CONTEXTS FOR SELECTOR AND SELECTOR GROUPS

As stated earlier, one shortcoming seems to be a sparse use of weight functions or weight values within selectors to establish the proper "context" for a selector or group of selectors. This is in fact a tiresome and tedious task accomplished through trial and error. We accomplish this somewhat heuristically in the present system. However, we might imagine an increasing wealth of knowledge being accumulated in an online learning environment for the proper weighting of terms based on an expert's interaction with the system. Now, the particular partial evaluation function for a term might likewise be learned in this manner.

The following is a rule group to distinguish between mildew diseases:

[leaves=abn][leaf malformation=abs][leaf mildew growth=on upper surface][roots=n] ::> [Diagnosis=Powdery mildew]
[leafspots halos=p][stem=n][seed mold growth=p][leaf mildew growth=on lower leaf surface] ::> [Diagnosis=Downy mildew]

The intersection of the exclusion sets of the selectors in the above rules forms a left-hand side of a rule for switching rule groups:

[leaves=n][roots=abn][seed mold growth=abs][stem=abn][leafspots halos=abs] ::> [Rule Group=Mildew Group]

Fig. 4 - Formulation of Rule Group Switching Rules

DISCUSSION

Confusion arises at several levels, but the root of the cause seems to be the assignment of confidence values to the partial evaluations of rules, which has been misguidedly forced to be homogeneous between rules. Furthermore, expert rules were derived under somewhat ill-defined pretexts. When the human comes up with the rule, he is not aware of the consequences of the machine's partial evaluation of his rule. He is only considering the relevant variables in totality, not the exception-driven cases necessary for accurate partial evaluation. There is a violation of knowledge granularity between man and machine, since the man assumes (upon the recommendation of the intrepid knowledge engineer) that if he works at the highest level of his experience (i.e., complete knowledge), he can trust the machine to accomplish the task of exception handling (which he has built into his own knowledge base) in a manner similar to that which he himself would have used. In such a manner, results in the machine evaluation scheme appear that are quite different than anticipated. Moreover, since this inherent assumption is relatively easy to make and somewhat obscure, another expert may readily verify the rectitude of the first expert's rules but will fail just the same to see the reason for the rules not stacking up to expectations.

Surprisingly, humans possess an uncanny proclivity for exception handling. Children of age 2-7 encounter no difficulty in breaking down their internalized models of language and mores and physical reality to rebuild them in the light of conflicting environments until a model consistent with their surroundings is achieved. We continue this process throughout our adult life, but with a different emphasis and motivation.
Exceptions trigger demon processes so subtly that often we are not aware that any exceptional condition has occurred. This is the true power of human thought: complete models are no longer necessary when exception handling is well integrated.

Another problem, which is more a fault of the expert than of the machine, is that the expert rules do not make well-balanced use of the descriptors. Since the expert himself devises the descriptors, it is unlikely that the unused ones are irrelevant; they are simply irrelevant to this particular expert. One possible solution is to ask several other experts to augment the rules, adding supporting cases and exceptions to the existing expert rules. In effect, this integrates multiple sources of knowledge [5] and avoids "inbreeding". Regardless of the approach, diversity in the choice of experts will build a stronger system.

CONCLUSION

The prime deficiency of PLANT is currently its inability to utilize more than one evaluation scheme within a rule group. This prevents simultaneous assessment of the diagnostic plausibility of several classes of diseases (e.g., the required evaluation scheme for stem diseases may differ from the scheme for leafspot diseases, but within a PLANT rule group only one evaluation scheme is allowed). Given sufficient real-world data, this presents no problem to the system, but sparse or incomplete field data mandates optimal use of the user's input. Consequently, non-homogeneous evaluation is proposed to alleviate the problem. Non-homogeneity can be effected by imposing a supervisory control mechanism above the existing evaluation schemes, which would ultimately dispatch the proper subgroup of rules once sufficient evidence precludes satisfaction of any other subgroup. Alternatively, a special rule group to accomplish rule group switching might be defined to direct the system to the proper subgroup with its associated evaluation scheme. The current ADVISE [Baskin & Michalski] implementation of PLANT is particularly conducive to this approach. In addition, the facilities in ADVISE, an "empty expert system", allow a wealth of other questions regarding evaluation schemes to be settled quickly and painlessly.

REFERENCES

[1] Chilausky, R., "A Prototype Computer Based Consulting System Using the Variable Valued Logic System VL1: Methodology and Implementation," Thesis, Dept. of Comp. Sci., Univ. of Illinois, Urbana, January 1979.
[2] Chilausky, R. and Michalski, R., "Knowledge Acquisition by Encoding Rules versus Computer Induction from Examples: A Case Study Involving Soybean Pathology," International Journal for Man-Machine Studies, No. 12, 1980.
[3] Larson, J.B. and Michalski, R., "Selection of Most Representative Training Examples and Incremental Generation of VL1 Hypotheses: The Programs ESEL and AQ11," Report No. 877, Dept. of Comp. Sci., Univ. of Illinois, Urbana, May 1978.
[4] Michalski, R., "A Variable-Valued Logic System VL1," Proc. of the 4th Intl. Symposium on Multiple-Valued Logic, West Virginia Univ., Morgantown, May 1974, pp. 323-346.
[5] Michie, D., "Expert Systems," The Computer Journal, Vol. 23, No. 4, pg. 369, January 1980.
Inheritance of Statistical Properties

Neil C. Rowe
Department of Computer Science
Stanford University
Stanford, CA 94305

Abstract

Statistical aggregate properties (e.g. mean, maximum, mode) have not previously been thought to "inherit" between sets. But they do in a weak sense, and a collection of such "weak" information can be combined in a rule-based architecture to get stronger information.

1. Motivation

Suppose we have conducted a census of all elephants in the world and we can definitely say that all elephants are gray. Then by set-to-subset inheritance of the "color" property, the set of elephants in Clyde's herd must be gray, Clyde's herd being some particular herd of elephants. This will not work for statistical aggregate properties such as maximum and mean. Suppose our census found that the longest elephant in the world is 27 feet long, and the average elephant 15 feet. This does not mean the longest elephant in Clyde's herd is 27 feet, nor the average in the herd 15 feet. But a weak form of inheritance is present, for we can assign different degrees of likelihood to the following:

This work is part of the Knowledge Base Management Systems Project, under contract #N00039-82-G-0250 from the Defense Advanced Research Projects Agency of the United States Department of Defense. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies of DARPA or the US Government.

1. "The longest elephant in Clyde's herd is 30 feet long."
2. "The average elephant in Clyde's herd is 30 feet long."
3. "The longest elephant in Clyde's herd is 27 feet long."
4. "The average elephant in Clyde's herd is 27 feet long."
5. "The longest elephant in Clyde's herd is 16 feet long."
6. "The average elephant in Clyde's herd is 16 feet long."

Statements 1 and 2 are impossible. Statement 3 is possible but a bit unlikely, whereas statement 4 is almost certainly impossible. Statement 5 is surprising and hence apparently unlikely, whereas 6 is quite reasonable. Since we don't know anything of Clyde's herd other than that they are elephants, a kind of inheritance from the properties of elephants in general must be happening.

The issue here is more important than elephants. Thousands of databases in existence support statistical questions about their contents. Exact answers to such questions may be very time-consuming for large data sets and/or remote access. Many users, especially non-statisticians, may be willing instead to accept much faster approximate answers via inheritance methods [5].

2. Our four-characteristic approach

We wish to address inheritance of the set properties maximum, mean, standard deviation, median, mode, fits to simple distributions, and correlations between different values of the same item. Our theory concerns set representation only (but sets of cardinality one can represent individuals). It concerns "definitional" sets primarily (those with absolute criteria for membership) as opposed to "natural kind" sets [1] (though degrees of set membership as in fuzzy set theory could be introduced). The theory mainly deals with extensions (exemplars), not intensions (meanings). It also addresses only the set-subset semantic relationship; however, other relationships can often be viewed this way by "atomization" of the included concepts, e.g.
geographical containment may be seen as a set-subset relationship between sets of points.

The key is to note that while in a few cases statistical properties inherit values exactly from set to set, in most cases they do not; but there are characterizations of a numeric statistic that will inherit much more often:

- an upper bound on its value
- a lower bound on its value
- a best estimate of the value
- a standard deviation of possibilities for the value

Some examples:

- An upper bound on the maximum of a subset is the maximum of the set.
- A lower bound on the maximum of a subset is the minimum of the set.
- A best estimate of the mean of a subset, in the absence of further information, is the mean of the set.
- A standard deviation of the mean of a subset is approximately the standard deviation of the set times the square root of the difference of the reciprocals of the subset size and set size.

The last also illustrates an important feature of statistical property inheritance, namely that functions (in the mathematical sense) of values may be inherited rather than the values themselves. But since the different values are so strongly coupled, it seems fair still to call it "inheritance". Inheritance of nonnumeric statistics such as mode can analogously be characterized by a best estimate, a superset guaranteed to include all values, and an estimated relative frequency of the estimate among all possible values. Note that this approach is a distinct alternative to often-arguable certainty factors for specifying partial knowledge.

3. Inheritance types

There are three "dimensions" of statistical inheritance: what statistic it concerns, which of the four abovementioned manifestations it addresses, and how it basically works. The main categories of the latter are:

- Downwards inheritance. That is, from set to subset, as in the examples of the last section. This is the usual direction for statistical inheritance, since it is usually the direction of greatest fanout: people tend to store information more for general concepts than specific concepts, for broadest utility. In particular, downwards inheritance from sets to their intersection is very common in human reasoning, much more so than reasoning with unions and complements of sets.

- Upwards inheritance. Inheritance from subset to set occurs with set unions, in particular unions of disjoint sets, which (a) seem easier for humans to grasp, and (b) have many nice inheritance properties (e.g. the largest elephant is the larger of the largest male and largest female elephants). Sampling, random or otherwise, to estimate characteristics of a population is another form of upwards inheritance, though with the special disadvantage of involving a non-definitional set. Upwards inheritance also arises with caching [4]. People may cache data on some small subsets important to them (like Clyde's herd) in addition to general-purpose data. Upwards (as well as downwards) inheritance is helpful for dealing with "intermediate" concepts above the cache but below general-purpose knowledge (e.g. the set of elephants on Clyde's rangelands).

- Lateral inheritance. A set can suggest characteristics of sibling sets of the same parent superset [2]. Two examples are set complements (i.e. the set of all items not in a set, with respect to some universe), and sibling sets that differ only by an independent variable such as time or space, where there are constraints on the rate of change (i.e. derivatives) of numeric attributes between siblings (e.g.
the stock market average on successive days).

- Diagonal inheritance. An interesting hybrid of downwards and lateral inheritance is possible with statistical properties. Given statistics on the parent and some set of siblings, we can often "subtract" out the effect of the known siblings from the parent to get better estimates on the unknown siblings. For instance, the number of female elephants is the total number of elephants minus the number of male elephants. This also works for moment and extrema statistics.

- Intra-concept inheritance. Inheritance can also occur between different statistics on the same set, if certain statistics are more "basic" than others. For instance, the mean can be estimated as the average of the maximum and minimum, and thus can be said to "inherit" from them; people may reason this way, as in guessing the center of a visual object from its contours. But in principle almost any direction is possible with numerical and nonnumerical relaxation techniques.

- Value-description-level inheritance. Real-world property values, especially nonnumeric ones, can be grouped at different levels of detail, and inheritance is possible between levels for the same set and same statistic. For instance, the number of different herds can be estimated from the number of different elephants and general knowledge of how many elephants are in a herd.

- Inheritance-rule inheritance. Some sets are sufficiently "special" to have additional inheritance rules for all subsets or supersets. An example is an all-integer set, where for any subset an upper bound on the number of distinct values for that property is the ceiling on the range.

4. Closed-world inferences

Since there are many statistics, and even a small set can have many subsets, default reasoning is essential for efficiency with statistical properties. Inferences from the absence of explicit memory information are common in human reasoning [3]; in particular, the idea that "sufficiently important" sets whose statistics are not explicitly noted must not be "unusual" in regard to those statistics. We can define "sufficiently important" and "unusual" relative to what inheritance predicts.

5. A production system architecture

So many different kinds of inheritance (even just those applicable to the same concept), complicated combination and cascading of different inheritances, inheritance of functions of values rather than values, inheritance-rule inheritance -- all this classically suggests that a production system architecture is needed: that is, the encoding of inheritance categories as production rules.

There are two conflict resolution issues for the control structure of such an architecture: which rules to invoke, and how to resolve different answers from different rules. Many different inference paths can be followed in making a statistical estimate, even without including all the possible rearrangements of a set expression involving intersections, unions, and complements. Since these can give different final answers, it is important to explore as many of them in parallel as possible, unlike most production systems, where a single "best" alternative is desired. But some limits to parallelism have to be set for complicated queries, and we are currently investigating "weakest-first" inference. (Arithmetic must be generalized for operations on intervals.)
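As a concrete reading of the four characterizations and of arithmetic generalized to intervals, consider the following sketch of downward inheritance for the mean of a subset. It is a minimal illustration under our own assumptions (the class and function names, and the example numbers, are invented); only the bound and estimate rules stated in Section 2 are encoded.

    import math

    class StatEstimate:
        """Four characterizations of an inherited numeric statistic."""
        def __init__(self, lower, upper, estimate, stdev):
            self.lower, self.upper = lower, upper        # guaranteed bounds
            self.estimate, self.stdev = estimate, stdev  # best guess + spread

        def intersect(self, other):
            # Interval part of combining two inference paths:
            # the cumulative range is the intersection of the ranges.
            return (max(self.lower, other.lower), min(self.upper, other.upper))

    def inherit_mean(set_min, set_max, set_mean, set_stdev, n_set, n_subset):
        # Downward inheritance for the mean of a subset (Section 2):
        # the subset mean must lie between the set minimum and maximum;
        # its best estimate is the set mean; its standard deviation is
        # approximately set_stdev * sqrt(1/n_subset - 1/n_set).
        spread = set_stdev * math.sqrt(1.0 / n_subset - 1.0 / n_set)
        return StatEstimate(set_min, set_max, set_mean, spread)

    # Illustrative numbers: elephants average 15 ft (std 3 ft), range
    # 6-27 ft; Clyde's herd has 20 of the world's 10000 elephants.
    herd_mean = inherit_mean(6.0, 27.0, 15.0, 3.0, 10000, 20)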
Combining results from different inference paths is straightforward for numeric statistics. Intersect the ranges to get a cumulative range. Get the cumulative estimate by assuming independence of all the estimates, combining them as if their errors were characterized by normal distributions, via the classical statistical formulas; the cumulative standard deviation follows directly. Even with nonindependence in the latter calculations, the estimate should not be off by much, and the standard deviation for the two-path case is never more than 70% (2^{-1/2}) of what it should be.

6. An application

We are implementing a program that uses these ideas to answer statistical questions for a large database [5]. It uses several hundred rules from a variety of sources: mathematical definitions, extreme-value analysis of definitions, statistical theorems, exploratory data analysis, database dependency theory, statistical database inference security research, psychology of conceptual classes, and general principles of information systems. As with many other "expert systems" in artificial intelligence, there is a more fundamental mathematical theory -- in this case, nonlinear optimization and cross-entropy minimization [6] -- that underlies many of the rules, but is too intractable for all but the simplest cases to be of much use.

References

1. R. J. Brachman and D. J. Israel. KL-ONE Overview and Philosophy. In Research in Knowledge Representation for Natural Language Understanding: Report No. 4785, W. A. Woods, Ed., Bolt Beranek and Newman, 1981, pp. 5-26.
2. Jaime G. Carbonell. Default Reasoning and Inheritance Mechanisms on Type Hierarchies. Proceedings, Workshop on Data Abstraction, Databases, and Conceptual Modelling, Pingree Park CO, June 1980, pp. 107-109.
3. Allan Collins. Fragments of a Theory of Human Plausible Reasoning. Proceedings, Second Conference on Theoretical Issues in Natural Language Processing, Urbana IL, July 1978, pp. 194-201.
4. D. B. Lenat, F. Hayes-Roth, and P. Klahr. Cognitive Economy. Working Paper HPP-79-15, Stanford University Heuristic Programming Project, June 1979.
5. Neil C. Rowe. Rule-Based Statistical Calculations on a Database Abstract. Proceedings, First LBL Workshop on Statistical Database Management, Menlo Park CA, December 1981, pp. 163-176.
6. John E. Shore and Rodney W. Johnson. "Properties of Cross-Entropy Minimization." IEEE Transactions on Information Theory IT-27, 4 (July 1981), 472-482.
A NONCLAUSAL CONNECTION-GRAPH RESOLUTION THEOREM-PROVING PROGRAM

Mark E. Stickel
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

A new theorem-proving program, combining the use of nonclausal resolution and connection graphs, is described. The use of nonclausal resolution as the inference system eliminates some of the redundancy and unreadability of clause-based systems. The use of a connection graph restricts the search space and facilitates graph searching for efficient deduction.

I INTRODUCTION

This paper describes some of the theory and features of a nonclausal connection-graph resolution theorem-proving program being developed as a reasoning component of a natural-language-understanding system. The most important characteristics of the program are:

- Nonclausal resolution is used as the inference system, eliminating some of the redundancy and unreadability of clause-based systems.
- A connection graph is used to represent permitted resolution operations, restricting the search space and facilitating the use of graph searching for efficient deduction.
- Heuristic search and special logical connectives are used for program control.

The following sections will describe these aspects of the program, citing disadvantages and difficulties as well as advantages, and will be followed by a description of the implementation status of the program and future plans for it.

This research was supported by the Defense Advanced Research Projects Agency with the Naval Electronic Systems Command under contract N00039-80-C-0575. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

II NONCLAUSAL RESOLUTION

One of the most widely criticized aspects of resolution theorem proving is its use of clause form for wffs. The principal criticisms are:

- Conversion of a wff to clause form may eliminate pragmatically useful information encoded in the choice of logical connectives (e.g., ¬P ∨ Q may suggest case analysis while the logically equivalent P ⊃ Q may suggest chaining).
- Use of clause form may result in a large number of clauses being needed to represent a wff, as well as in substantial redundancy in the resolution search space.
- Clause form is difficult to read and not human-oriented.

The clausal resolution rule can be easily extended to general quantifier-free wffs [12,8]. Proofs of soundness and completeness are in [12]. Where clausal resolution resolves on clauses containing complementary literals, nonclausal resolution resolves on general quantifier-free wffs containing atomic wffs (atoms) occurring with opposite polarity, which is determined by the parity of the number of explicit or implicit negations in whose scope the atom appears (positive polarity if even, negative polarity if odd). In clausal resolution, resolved-on literals are deleted and the remaining literals disjoined to form the resolvent. In nonclausal resolution, all occurrences of the resolved-on atom are replaced by F (false) in the wff in which it occurs positively and by T (true) in the wff in which it occurs negatively. The resulting wffs are disjoined and simplified by truth-functional reductions that eliminate embedded occurrences of T and F and optionally perform simplifications such as A ∧ ¬A → F.

Definition 1.
If A and B are ground wffs and C is an atom occurring positively in A and negatively in B, then the result of simplifying A(C ← F) ∨ B(C ← T), where X(y ← z) is the result of replacing every occurrence of y in X by z, is a ground nonclausal resolvent of A and B.

It is clear that nonclausal resolution reduces to clausal resolution when the wffs are restricted to be clauses. In the general case, however, nonclausal resolution has some novel characteristics as compared with clausal resolution. It is possible to derive more than one resolvent from the same pair of wffs, even resolving on the same atom, if the atom occurs both positively and negatively in both wffs (e.g., atoms within the scope of an equivalence occur both positively and negatively). Likewise, it is possible to resolve a wff against itself.

The ground nonclausal resolution rule can be lifted to nonground wffs by renaming parent wffs apart and unifying sets of atoms from each parent, one atom of each set occurring positively in the first wff and negatively in the second. As with clausal resolution, only single atoms need be resolved upon if the resolution operation is augmented by a factorization operation that derives a new wff by instantiating a wff with a most general unifier of two or more distinct atoms occurring in the wff (regardless of polarity).

A nonclausal resolution derivation of F from a set of wffs demonstrates the unsatisfiability of the set of wffs. Nonclausal resolution is thus, like clausal resolution, a refutation procedure. Variants of the procedure that attempt to affirm rather than refute a wff are possible (e.g., see the variety of resolution rules in [9]), but are isomorphic to this procedure.

Although clause form is often criticized, use of nonclausal form has the disadvantage that most operations on nonclausal form are more complex than the same operations on clause form. The result of a nonclausal resolution operation is less predictable than the result of a clausal resolution operation. Clauses can be represented as lists of literals; sublists are appended to form the resolvent. Pointers can be used to share lists of literals between parent and resolvent [5]. With many simplifications such as A ∧ T → A and A ∧ ¬A → F being applied during the formation of a nonclausal resolvent, the appearance of a resolvent may differ substantially from its parents, making structure sharing more difficult.

For most forms of clausal resolution, an atom does not occur more than once in a clause. In nonclausal resolution, an atom may occur any number of times, with possibly differing polarity. In clausal resolution, every literal in a clause must be resolved upon for the clause to participate in a refutation. Thus, if a clause contains a literal that is pure (cannot be resolved with a literal in any other clause), the clause can be deleted. This is not the case with nonclausal resolution; not all atom occurrences are essential in the sense that they must be resolved upon to participate in a refutation. For example, { P ∧ Q, ¬Q } is a minimally inconsistent set of wffs, one of which contains the pure atom P. A more complicated definition of purity, involving this notion of essential occurrences, must be used. The subsumption operation must also be redefined for nonclausal resolution to take account of such facts as the subsumption of A by A ∧ B, as well as the clausal subsumption of A ∨ B by A.
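Definition 1 is directly executable. The following is a minimal sketch, not the program described in this paper: the formula representation (nested tuples) and the function names are our own, and only a few truth-functional reductions are included.

    # Wffs are atoms (strings), 'T', 'F', or tuples ('not', w),
    # ('and', w1, w2), ('or', w1, w2).

    def substitute(wff, atom, value):
        # Replace every occurrence of atom in wff by 'T' or 'F'.
        if wff == atom:
            return value
        if isinstance(wff, tuple):
            return (wff[0],) + tuple(substitute(w, atom, value)
                                     for w in wff[1:])
        return wff

    def simplify(wff):
        # Truth-functional reductions eliminating embedded T and F.
        if not isinstance(wff, tuple):
            return wff
        op, args = wff[0], [simplify(w) for w in wff[1:]]
        if op == 'not':
            return {'T': 'F', 'F': 'T'}.get(args[0], ('not', args[0]))
        a, b = args
        if op == 'and':
            if 'F' in args: return 'F'
            return b if a == 'T' else (a if b == 'T' else ('and', a, b))
        if op == 'or':
            if 'T' in args: return 'T'
            return b if a == 'F' else (a if b == 'F' else ('or', a, b))

    def resolve(A, B, atom):
        # Ground nonclausal resolvent (Definition 1): atom occurs
        # positively in A and negatively in B.
        return simplify(('or', substitute(A, atom, 'F'),
                               substitute(B, atom, 'T')))

    print(resolve(('or', 'P', 'Q'), ('or', ('not', 'P'), 'R'), 'P'))
    # -> ('or', 'Q', 'R')
    print(resolve(('and', 'P', 'Q'), ('not', 'Q'), 'Q'))
    # -> 'F', refuting the inconsistent set { P & Q, ~Q }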
It may be difficult to recognize complementary wffs. For example, P ∨ Q occurs positively in Q ∨ R ∨ P and in ¬P ⊃ Q. The effect of resolving a pair of wffs on nonatomic subwffs can be achieved by multiple resolution operations on atoms. Resolution on both atomic and nonatomic subwffs could result in redundant derivations. A connection-graph procedure would be complicated by the need to attach links to logical subwffs (e.g., P ∨ Q in Q ∨ R ∨ P), and link inheritance would be further complicated, since subwffs of a resolvent may have no parent subwffs (e.g., when P ∨ Q and ¬P ∨ R are resolved, the resolvent Q ∨ R is a subwff of neither parent). Similar complications arise if equality inferences are used that introduce new structure into the result.

Although the nonclausal resolution rule in general seems adequate as compared with the above proposed extension to matching on nonatomic subwffs, the handling of the equivalence relation in [12] is inadequate. In resolving P ≡ Q and (P ∧ R) ∨ (¬P ∧ S), it is possible to derive Q ∨ S and ¬Q ∨ R, but not the more natural result of simply replacing P by Q. It is questionable whether handling the equivalence relation in nonclausal resolution without further extension is worthwhile in comparison with the representational advantages of the negation normal form used in [1,2]. Another difficulty with the equivalence relation is that it sometimes needs to be removed during Skolemization. [9] provides extensions to nonclausal resolution that defer Skolemization and permit equivalence relations to be retained longer.

III CONNECTION GRAPHS

Connection-graph resolution was used in [7]. It has the following advantages:

- The connection-graph refinement is quite restrictive. Many resolution operations permitted by other resolution procedures are not permitted by connection-graph resolution.
- The links associated with each wff function partially as indexing of the wffs. Effort is not wasted in the theorem prover's examining the entire set of wffs for wffs that can be resolved against newly derived wffs.
- Links can be traversed by a graph-searching algorithm whereby each link traversal denotes a resolution operation. This can be done to plan a deduction without actually constructing it. This graph searching may resemble the searching performed for deduction in knowledge representation languages.

Connection-graph resolution is extended in a natural way to use the nonclausal resolution inference rule. A connection graph is a set of wffs and a set of links that connect atoms occurring with positive polarity in one wff and negative polarity in the same or another wff. Performing the nonclausal resolution operation indicated by a link results in the production of a new connection graph with the resolved-upon link eliminated and the nonclausal resolvent added. Roughly speaking, atoms of the nonclausal resolvent are linked only to atoms to which atoms of the parent wffs were linked.

Definition 2. Let S be a set of ground wffs. Let L be { (C, A, B) : A, B ∈ S and atom C occurs positively in A and negatively in B }. Then (S, L) is the full connection graph for S.

Definition 3. Let S be a set of ground wffs and L be its connection graph. Let e = (C, A, B) be an element of L and D be the nonclausal resolvent A(C ← F) ∨ B(C ← T). Let S′ be S ∪ {D}.
Let L′ be

    (L − {e})
    ∪ { (E, D, D′) : atom E occurs positively in D, and (E, A, D′) ∈ L or (E, B, D′) ∈ L }
    ∪ { (E, D′, D) : atom E occurs negatively in D, and (E, D′, A) ∈ L or (E, D′, B) ∈ L }
    ∪ { (E, D, D) : atom E occurs positively and negatively in D, and (E, A, A) ∈ L, (E, B, B) ∈ L, (E, A, B) ∈ L, or (E, B, A) ∈ L }.

Then the connection graph (S′, L′) is derived from (S, L) by ground nonclausal connection-graph resolution.

A nonclausal connection-graph resolution refutation of an input set of wffs is a derivation of a set of wffs including F by nonclausal connection-graph resolution from the full connection graph of the input set of wffs.

Ground nonclausal connection-graph resolution can be extended to the nonground case by including in the links the unifier of the atoms they connect, keeping wffs renamed apart, and by including links between variants of the same wff (to allow a wff to resolve, directly or indirectly, against a variant of itself). Factorization must also be included. Either factors with appropriately inherited links must be added for each wff in the connection graph, or special factor links can be used, with link inheritance rules for both resolve and factor links after resolution and factorization operations.

The nonclausal connection-graph resolution procedure is sound, and there is reason to believe it is complete. However, it has not yet been proved to be complete, and the history of proving completeness of connection-graph procedures for the simpler clausal case (see [2]) suggests it may be difficult.

One reason it is difficult to prove the completeness of the connection-graph procedure is that the link inheritance rules exclude some links that would be present if the connection graph were merely an encoding of all permitted resolution operations for ordinary resolution. Exactly which links are excluded depends on the order in which resolution operations are performed. The effect of connection-graph resolution is to impose the following restriction: if a pair of atoms in a pair of wffs is resolved upon, atoms derived (in later resolution operations) from the resolved-on atoms cannot be resolved against each other. For example, if a set of wffs includes P ∨ Q and ¬P ∨ ¬Q, these two wffs can be resolved upon P and Q, resulting in tautologies that are discarded; after that, neither wff can be resolved with an atom descended from the other, even though doing so would not result in a tautology.

Connection-graph resolution procedures can possibly be incomplete by succeeding in finding refutations when links are resolved upon in some orders, but not in others. For example, consider the combination of linear resolution and connection-graph resolution for clauses. Each is complete, but the combination is not. If linear connection-graph resolution is applied to { P ∨ ¬Q, ¬P ∨ ¬Q, Q } with Q as top clause, depth-first search will find a refutation, but breadth-first search will not. This contrasts with the usual situation, in which breadth-first search is "safe", always guaranteed to find a refutation if there is one. To see that it fails in this case, observe that after P and ¬P are generated on the first level of breadth-first search, Q and ¬Q have no links, and thus none of the three input clauses can be further resolved upon to lead to a refutation. P and ¬P are linked, but cannot be resolved without violating the linear-resolution restriction.
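Read operationally, Definition 3 is a small graph transformation: delete the resolved-upon link, add the resolvent, and let the resolvent inherit only those links whose atom survives in it. A rough rendering for the ground case follows; it reuses the resolve routine sketched in Section II, and the representation and names are our own, not those of the actual program.

    # links: a set of (atom, pos_wff, neg_wff) triples, with e in links.

    def occurs(wff, atom, polarity):
        # True if atom occurs in wff with the given polarity
        # (+1 positive, -1 negative); polarity flips under 'not'.
        if wff == atom:
            return polarity == 1
        if isinstance(wff, tuple):
            flip = -1 if wff[0] == 'not' else 1
            return any(occurs(w, atom, polarity * flip) for w in wff[1:])
        return False

    def resolve_link(wffs, links, e):
        atom, A, B = e
        D = resolve(A, B, atom)          # from the Section II sketch
        kept = links - {e}               # the resolved-upon link is deleted
        inherited = set()
        for (E, P, N) in kept:
            p_par, n_par = P in (A, B), N in (A, B)
            if not (p_par or n_par):
                continue                 # link does not touch a parent
            if p_par and not occurs(D, E, 1):
                continue                 # atom E vanished from the resolvent
            if n_par and not occurs(D, E, -1):
                continue
            inherited.add((E, D if p_par else P, D if n_par else N))
        return wffs + [D], kept | inherited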
A set of assertions in a connection graph can to some extent be regarded and treated as a semantic network -- more so than the same set of assertions without the connection graph. For example, the full connection graph for

    elephant(Clyde)
    elephant(x) ⊃ mammal(x)
    elephant(y) ⊃ color(y, gray)
    mammal(z) ⊃ animal(z)

would contain links between the following pairs of atoms:

    ℓ1. (elephant(Clyde), elephant(x))
    ℓ2. (elephant(Clyde), elephant(y))
    ℓ3. (mammal(x), mammal(z))

Answers to such queries as "What color is Clyde?" and "Is Clyde an animal?" can be found by graph searching, with minimal analysis of the assertions, by traversing the links in the connection graph. Such searching can be made more efficient by labeling the links (e.g., isa for ℓ1 and ℓ3, hascolor for ℓ2). The semantic content of the set of assertions is still conveyed by the assertions themselves, but control information is provided to a graph-searching procedure by the link labels.

Similar comments could be made regarding any logical representation. However, the use of a connection graph in which all permissible remaining resolution operations are encoded in explicit links can yield greater efficiency by eliminating traversal of multiple paths to the same goal. For example, suppose ℓ3 is resolved upon, resulting in the added assertion

    elephant(w) ⊃ animal(w)

and the added link

    ℓ4. (elephant(Clyde), elephant(w)).

The link ℓ3 is deleted. There is still only one path or proof that Clyde is an animal, since the absence of ℓ3 blocks the path or proof elephant(Clyde) - mammal(Clyde) - animal(Clyde).

Graph searching in the connection graph to determine taxonomic relations quickly is a simple illustration of the more general notion, extensively explored in [1,13], of using graph searching to determine the existence of refutations. The ideas and techniques developed there are applicable to nonclausal connection-graph resolution. Connection-graph resolution appears to offer the following advantages over these other schemes:

- Although graph searching can be done in the connection-graph resolution procedure, [1,13] do not allow for the actual formation of resolvents. If their techniques for graph search were adopted as a device for planning or quick refutation, connection-graph resolution could be regarded as a superset of these other methods.
- The actual formation of resolvents and the resulting change in the connection graph are useful for retaining information during a refutation, as well as for conveying information (about usage of wffs, etc.) from one refutation or assertion to the next. (Here it is assumed that the theorem prover is being used with an assertional database to which queries are posed and assertions occasionally added and deleted, as opposed to the usual situation in theorem proving, in which there is no persistent assertional database, all axioms being presented anew for each proof.)
- Connection-graph resolution provides a convenient, albeit unsophisticated, means of interleaving the matching of complementary literals and the adding of new instances of assertions (if more than one ground instance of a wff is required), as compared with the separate processes of searching for a mating, and quantifier duplication if the search fails [1].

Of course, the argument in favor of performing only graph searching as in [1,13] is that forming resolvents is expensive compared to traversing links, and the cost of creating and storing inherited links may be high.
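A query such as "Is Clyde an animal?" can then be answered by traversing labeled links alone, without forming any resolvents. The sketch below is only illustrative; the graph encoding and the isa/hascolor labels follow the example above, but are our own assumptions.

    from collections import deque

    # Nodes here are predicate names; each labeled link stands for a
    # resolution step that is planned but not actually performed.
    links = {
        'elephant': [('isa', 'mammal'), ('hascolor', 'gray')],
        'mammal':   [('isa', 'animal')],
    }

    def holds(start, goal, label='isa'):
        # Breadth-first traversal over links carrying the given label.
        queue, seen = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node == goal:
                return True
            seen.add(node)
            for lab, nxt in links.get(node, []):
                if lab == label and nxt not in seen:
                    queue.append(nxt)
        return False

    print(holds('elephant', 'animal'))   # True: Clyde is an animal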
A good system will probably have a mixture of resolution and graph searching, as in [4] for clausal connection-graph resolution. Graph searching is used in that system for look-ahead and to determine whether a refutation exists within a certain number of steps. Simple graph searching is used (e.g., not looking for refutations in which wffs occur more than once), with the full complexity and completeness of connection-graph resolution in the background.

One problem with graph searching to find refutations is in assessing the effectiveness of the procedure. In ordinary resolution theorem proving, effectiveness can be evaluated in part by examining the number of clauses generated, retained, used in the refutation, and so forth. [4] states: "Within this frame of reference it would be easy to design the 'perfect' proof procedure: the supervisor and the look-ahead heuristics would find the proof and then guide the system without any unnecessary steps through the search space." The amount of time used is a good measure for such a program, but should not be used to compare programs, as there may be differences in the machines the programs run on and in the efficiency of the programs themselves (as opposed to the algorithms). In general, as [4] states, a measure incorporating both total time and space will be required, adding the further complication of evaluating time-space trade-offs.

IV CONTROL

A link scheduler is used to specify a refutation search strategy. When an assertion is added by the user, it is linked in the connection graph to all previous assertions. When a resolvent is added, it is linked to the other assertions according to the link inheritance rules. All such added links are examined by the link scheduler. Three outcomes are possible:

- The link is deleted. For example, analysis may show that resolving on the link would create a tautology or pure wff that could not be used in a refutation, whereupon the link can be deleted.
- The link is retained, but not scheduled. Thus the link can be inherited, but cannot be resolved upon (though its descendants might be). This is done when combining connection-graph resolution with other refinements of resolution, such as ordering restrictions and the special logical-connective restrictions described below.
- The link is scheduled. It is given a numerical score and placed in the link schedule.

The theorem prover operates by repeatedly resolving on the best-scored link in the schedule, creating the resolvent, and scheduling the added links. Scheduling of the links is done after all the new links have been added, so that the link scheduler can act on such important facts as the number of links attached to an atom. (A sketch of this control loop appears below, after the description of the special connectives.)

Special logical connectives can be used to impose restrictions on the use of particular assertions. As in [10], the following connectives denote the following procedural interpretations of A ⊃ B:

- A → B. If literal A is ever asserted, assert B (forward chaining).
- B ← A. To prove literal B, try to prove A (backward chaining). Since a refutation procedure is being used, this is interpreted as "permit the resolution, on literal B, between A ⊃ B and any wff having support."
- A ⇒ B. If literal A is ever asserted, also assert B; and, to prove ¬A, try to prove ¬B.
- B ⇐ A. To prove literal B, try to prove A; and, if ¬B is ever asserted, also assert ¬A.
- A ⊃ B and ¬A ∨ B. Unrestricted and equivalent.
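Here is a rough sketch of the agenda loop just described: links are scored, the best is resolved upon, and newly inherited links are deleted, retained unscheduled, or scheduled. The graph interface (link_count, wff_size, resolve, classify, remove) and the scoring weights are invented for illustration; this is not the actual INTERLISP control code.

    import heapq, itertools

    tiebreak = itertools.count()   # keeps heap comparisons well-defined

    def score(link, graph):
        # Lower is better. Prefer atoms with few links (small immediate
        # branching factor), as discussed in Section V; weights invented.
        atom = link[0]
        return 2 * graph.link_count(atom) + graph.wff_size(link)

    def refute(graph):
        agenda = [(score(e, graph), next(tiebreak), e)
                  for e in graph.scheduled_links()]
        heapq.heapify(agenda)
        while agenda:
            _, _, e = heapq.heappop(agenda)
            resolvent, new_links = graph.resolve(e)  # deletes e, adds resolvent
            if resolvent == 'F':
                return True                          # refutation found
            for link in new_links:                   # scheduler's three outcomes
                verdict = graph.classify(link)
                if verdict == 'delete':              # e.g. tautology or purity
                    graph.remove(link)
                elif verdict == 'schedule':
                    heapq.heappush(agenda,
                                   (score(link, graph), next(tiebreak), link))
                # 'retain': kept for inheritance, never itself resolved upon
        return False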
The use of both nonclausal resolution and these special logical connectives gives this program some resemblance to natural deduction [3]. It represents an intermediate point between clausal resolution and natural deduction, with advantages of each. It differs from natural deduction since, for example, a backward-chaining application of A ⊃ B to C would result in ¬A ∨ C(B ← T) rather than C(B ← A) (with perhaps only a single instance of B replaced, requiring additional operations to replace the other occurrences). The latter expression may be more natural, but the former is more concise, because all occurrences of B are eliminated and only a single instance of A is added.

Heuristic search is used in a manner similar to the way it is employed in a clausal system [14] and in a natural-deduction system [15].

V STATUS AND FUTURE PLANS

The theorem-proving program is implemented as a 4000-line INTERLISP program and is presently being used as the deduction component of the MICROKLAUS natural-language-understanding system. Natural-language assertions and queries are translated by the DIALOGIC system [6] into logical form [11]. This logical form is further translated into predicate calculus for input to the theorem prover. The allowance for predicate variables extends the program slightly beyond ordinary first-order predicate calculus. Future work will expand the range of logical form handled, as not all logical forms that can be generated by DIALOGIC are presently being translated (the range of logical form generated by DIALOGIC is also being expanded).

Besides the unification filtering provided by the connection graph, atoms in assertions are indexed by predicate symbol so as to speed the addition of user input assertions to the connection graph when there is a large number of them. Wffs are also indexed by their propositional structure and predicate symbols to speed checking for alphabetic-commutative variants to be eliminated. More efficient indexing schemes will probably be tried, and variant elimination replaced by subsumption.

Factorization has not yet been implemented in the program. When two wffs are resolved upon a pair of atoms, all atoms instantiated to be the same as the instantiated resolved-on atoms are replaced by F or T, but there is no effort to force additional atoms, by further instantiation, to be the same as the resolved-on atoms. Thus, only "obvious" factors are used. This is incomplete, but effective. Factor links will be added for completeness.

So far, only fairly simple evaluation functions have been used in the search control process. They are similar to those used in [14], being weighted sums of the deduction depth of the wff (a measure of the effort required to derive the wff) and the number of atoms in the wff (a measure of the additional effort that will be needed to complete a refutation using the wff). Performance is generally superior to that in [14]. In ordering restrictions, atoms are also evaluated according to how many links are connected to them, so that atoms with fewer links can be resolved upon preferentially. Not only is the immediate branching factor reduced, but there is also the prospect that the other atoms with more links will be instantiated and inherit fewer links when the resolution operation is performed.

Interestingly, as was also noted in [14], there can be negative interactions among individually good ideas on search control. For example, a strong length-preference strategy and the strategy of resolving on an atom with the fewest links are somewhat inconsistent. When there are many assertions about some predicate -- some short and specific, others long and general -- the atom with the fewest links is likely to be linked only to long and general assertions.
Resolving on it thus may result in long resolvents that would be given low preference by a strong length-preference strategy. More work will be done on developing good evaluation functions.

The most important extension of the program will be further development of connection-graph searching -- both to provide input to evaluation functions and to perform entire deductions without creating any resolvents.

REFERENCES

[1] Andrews, P.B. Theorem proving via general matings. J. ACM 28, 2 (April 1981), 193-214.
[2] Bibel, W. On matrices with connections. J. ACM 28, 4 (October 1981), 633-645.
[3] Bledsoe, W.W. Non-resolution theorem proving. Artificial Intelligence 9, 1 (August 1977), 1-35.
[4] Blasius, K., Eisinger, N., Siekmann, J., Smolka, G., Herold, A., and Walther, C. The Markgraf Karl refutation procedure (Fall 1981). Proc. Seventh International Joint Conference on Artificial Intelligence, Vancouver, B.C., August 1981, 511-518.
[5] Boyer, R.S. and Moore, J S. The sharing of structure in theorem-proving programs. In Meltzer, B. and Michie, D. (eds.), Machine Intelligence 7. Edinburgh University Press, 1972.
[6] Grosz, B. Research on natural-language processing at SRI. SIGART Newsletter #79 (January 1982), 87-94.
[7] Kowalski, R. A proof procedure using connection graphs. J. ACM 22, 4 (October 1975), 572-595.
[8] Manna, Z. and Waldinger, R. A deductive approach to program synthesis. ACM Transactions on Programming Languages and Systems 2, 1 (January 1980), 90-121.
[9] Manna, Z. and Waldinger, R. Special relations in program-synthetic deduction. Technical Note 260, SRI Artificial Intelligence Center, March 1982.
[10] Moore, R.C. Reasoning about knowledge and action. Technical Note 191, SRI Artificial Intelligence Center, October 1980.
[11] Moore, R.C. Problems in logical form. Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, Stanford, June 1981.
[12] Murray, N.V. Completely non-clausal theorem proving. Artificial Intelligence 18, 1 (January 1982), 67-85.
[13] Sickel, S. A search technique for clause interconnectivity graphs. IEEE Transactions on Computers C-25, 8 (August 1976), 823-835.
[14] Stickel, M.E. The programmable strategy theorem prover: an implementation of the linear MESON procedure. Technical Report, Carnegie-Mellon University Computer Science Department, June 1974.
[15] Tyson, W.M. APRVR: A Priority-Ordered Agenda Theorem Prover. Ph.D. Dissertation, University of Texas at Austin, 1981.
Plan Recognition Strategies in Student Modeling: Prediction and Description

Bob London and William J. Clancey
Heuristic Programming Project, Computer Science Department
Stanford University, Stanford CA 94305

Abstract

This paper describes the student modeler of the GUIDON2 tutor, which understands plans by a dual search strategy. It first produces multiple predictions of student behavior by a model-driven simulation of the expert. Focused, data-driven searches then explain incongruities. By supplementing each other, these methods lead to an efficient and robust plan understander for a complex domain.

1. Basic problem: Modeling strategic problem solving

Diagnostic problem-solving requires domain knowledge and a plan for applying that knowledge to the problem. A hypothesis-directed diagnostic plan is a rationale for focusing on diagnoses (partial solutions) and for gathering data to solve the problem. The plan is thus a strategy for selecting and ordering the application of domain knowledge. Teaching diagnosis involves recognizing the intent behind a student's behavior, so that missing knowledge can be distinguished from inappropriate strategies. The teacher interprets behavior, critiques it, and provides advice about other approaches. To do this successfully and efficiently in a complex domain, the teacher benefits from multiple, complementary modeling strategies.

GUIDON2 is a tutoring program that uses the case method approach to teach medical diagnosis [5]. The system divides this task among three components: an "expert," a student modeler, and an instructional manager (see Figure 1-1). Its expert component, NEOMYCIN [4], separately and explicitly represents knowledge about the medical domain and the domain-independent strategies of diagnosis. The student modeler, a subprogram called IMAGE, interprets the student's behavior by using NEOMYCIN's knowledge, evaluates the student's skill, and produces alternatives. The instructional module of GUIDON2 will then apply discourse and teaching strategies in deciding whether to interrogate or advise the student.

[Figure 1-1: Components of the GUIDON2 teaching system -- strategic and domain knowledge, simulation of the expert, multiple predictions, descriptive heuristics, explanation, student model.]

A model of student strategies in medical diagnosis must disambiguate the possible purposes and knowledge underlying the student's actions. The approaches followed by other plan recognizers and student modelers are not sufficient here because:

1. the complex domain makes thorough searches impractical, whether top-down or bottom-up;
2. we are not modeling only facts and rules used in isolation, but also the procedures for applying them;
3. every one of the student's actions must be monitored, in case the teaching module decides to interrupt;
4. his behavior must be evaluated and not just explained; and
5. we might not have any explicit goal statements from the student, so we expect to rely only on his queries for problem data as evidence for his thinking.

A top-down, model-driven search works well in an area where the number of plausible solutions is small and the cost of computing them is manageable. In the SPADE-0 advisor for designing simple programs [9], Miller could use a narrow-branching, context-free "problem-solving grammar" to recognize next steps. Medical diagnosis does not generally fit this requirement. Tracking down a single solution can be very expensive, and many possible answers may exist. However, if the model of expertise offers a way to rank-order strategic decisions, then it can be used by a top-down search to suggest some range of solutions. Problems include: how to apply the model, how far to go in tracing a possible answer, and how many such solutions to generate.

A bottom-up, data-driven search is best in domains where it is easy to recognize the reasons underlying a solution. But in medical diagnosis, an "upward" search often leads to excessive combinatorics. If the student asks how long the patient's headache has lasted, NEOMYCIN links could show that he is testing a hypothesis of viral or bacterial meningitis, or hemorrhage, hematoma, migraine, etc. Or his diagnosis might be more inclusive (meningitis or vascular disorders in general). He might not be testing any specific diagnoses, but routinely following up recent data, or exploring for new hypotheses. Even if his focus can be specified, we would still have to surmise his overall purposes by searching for patterns in his previous actions: we must account for his planning -- not only his knowledge -- if we are going to teach procedures of diagnosis.

Other student modelers and plan recognizers have offered useful tools and have also shown why particular features prevent their direct application to teaching strategies of medical diagnosis. If a student modeler is infrequently invoked (say, when the user explicitly asks for help), then a thorough multi-technique, multi-pass search is practical. Genesereth's MACSYMA advisor [6] takes this approach; it also has the user's explicit statement of his goal as a guide in the search. The BELIEVER program [10] predicts the subject's current plan and updates the plan's details after observations, but does not judge the appropriateness of his behavior. Its single predicted plan, plus data-driven completion of details and repairs, is appropriate in its domain of common-sense actions, where there are few probable interpretations of any given action; since the predicted plan is unlikely to be far off, the need for repairs is relatively minor. Another ICAI program, BUGGY [1], succeeds in a forward, data-driven search of a "procedural net" because the domain (children's subtraction) was completely described by about 200 rules. The student's skills could thus be mapped or "overlayed" onto the procedural net. Goldstein and Carr's WUMPUS coach [2, 7], and the student modeler for the first version of GUIDON [3], also "overlay" estimates of the student's performance onto semi-independent rules of problem knowledge, in a primarily data-driven way.

The IMAGE student modeler uses two separate but complementary approaches to infer and evaluate the student's plan, under our requirements as listed above. It first forms a model-driven range of predictions, then data-driven descriptions of the student's behavior. (IMAGE also takes two further steps: describing the student's level of domain knowledge, and evaluating the success of the student model itself; these are discussed elsewhere [8].)

[Figure 1-2: Phases of the modeler]

2. Understanding strategies: An example

In a test case, the tutor presented a patient who complained of headaches, vomiting, and bouts of irrationality and apathy. The student then asked several questions about the patient's headache, and whether he had been irritable or depressed (both negative).
At this point IMAGE, using NEOMYCIN's knowledge, infers that the student's active hypotheses are brain pressure and tension headache, and that his current task (purpose) is trying to set up a group of hypotheses to consider. (Refer to Figure 2-2 for IMAGE's resolution of these actions into a global plan.) Since IMAGE believes the student is not yet focusing on any one diagnosis, it predicts that he will continue to ask questions that follow up previous data, rather than focus on hypotheses. (We will see below how the predictions are generated.) At this point, the data expectancies are:

    Stiff neck; Fever; Precipitating factors of headaches;
    Abruptness of headache onset; Apathetic; Confused; Amnesiac;
    Dysphasic; Aphasic

These represent the choices NEOMYCIN would favor, in the partial order given. However, the student's next query is: "Does the patient show focal neurological signs?" This does not match the expectancies: the model-driven phase has not explained the student's thinking here. So IMAGE begins a rule-based, bottom-up search strategy to understand his behavior. One focused search is guided by the rule shown in Figure 2-1. This search succeeds in finding a new diagnosis (brain mass-lesion) that is a refinement of an active hypothesis (brain pressure) and is also related to the query (focal signs). Thus IMAGE assumes the student is "refining" (specifying) his diagnosis.

    Rule-20: Refined hypothesis
    IF   some untested hypotheses that are closely relevant to the
         student's data query are related as causal or "taxonomic"
         descendents of any members of his set of active hypotheses,
    THEN assume he is "Refining" an active hypothesis; if it can be
         pinpointed to one hypothesis, then consider "refining" that
         node in the student-model

Figure 2-1: Example of a rule for Descriptive phase

Explaining the student's action leads to updating the student model in several ways. Since he seems to be testing a diagnosis, IMAGE infers that the student is no longer gathering initial data (the "Identify-problem" node in Figure 2-2): now he is trying to focus on a few diagnoses (the "Establish-hypothesis-space" node). (Further modeling here includes "overlaying" domain knowledge, and student evaluation [8].)

IMAGE now predicts the student's behavior by a model-driven generation of multiple expectations, since the student has entered a new stage in his problem solving. By simulating NEOMYCIN, IMAGE finds that the expert model's preferred plan would be to pursue the current focus (brain pressure). Its secondary choice is to pursue the other active hypothesis. These preferred plans lead to the following data expectancies (in order of evidential strength):

    For hypothesis brain pressure: Papilledema; Enlarged head;
        Diplopia; Seizures
    For hypothesis tension headache: Headaches sensitive to emotional
        disturbances; Headache pressure; Headache throbbing;
        Fever (disconfirmatory)

As it turns out, the student asks whether the patient has a fever. IMAGE confirms its prediction: it believes that one part of the student's plan is to test (by mildly disconfirming evidence) the diagnosis of tension headache. The top-down prediction produces an immediate, likely explanation. (This step is incorporated into the global plan as the "Test-hypothesis" node in Figure 2-2. It would have been very difficult to pin down the student's thinking with a bottom-up search starting with "fever" because of the multiple purposes such a datum could serve.)
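The dual strategy just illustrated, matching the query against the ranked expectancies first and falling back to focused descriptive rules only on a mismatch, can be outlined schematically. The function and field names below are our own assumptions, not IMAGE's actual code.

    def interpret_query(query, expectancies, descriptive_rules, model):
        # Phase 1 (model-driven): the query is explained immediately if
        # it matches one of the ranked expectancies obtained by
        # simulating the expert.
        for rank, expect in enumerate(expectancies):
            if query == expect.datum:
                return ('predicted', expect.task, expect.hypothesis, rank)

        # Phase 2 (data-driven): otherwise try focused descriptive
        # rules, e.g. Rule-20 relating the query to refinements of
        # active hypotheses.
        for rule in descriptive_rules:
            explanation = rule(query, model)   # returns None on failure
            if explanation is not None:
                model.update(explanation)      # e.g. a new task node
                return ('described', explanation)

        # Save partial results to help disambiguate the next observation.
        model.note_unexplained(query)
        return ('unexplained', query)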
[Figure 2-2: Ten explanations resolved into a global student plan. (Boldface nodes are tasks invoked by NEOMYCIN meta-rules; small capitals denote active hypotheses; numbers are followed by the student's data query; remaining terms refer to kinds of strategies and contexts.)]

3. Prediction and description: Discussion

IMAGE's predictions are termed "prescriptive" because they represent the range of plans that the student should be following: they are what the NEOMYCIN expert would do. They pin down the most likely possibilities of student behavior immediately. If the student's actions violate the predictions, then slower data-driven processing is required to explain the data. But when the observations match the predictions, student behavior is quickly explained.

IMAGE generates its predictions by simulating the expert at key points: applying the domain-independent tasks and strategic meta-rules to the inferred context of student thinking. However, it makes several adjustments to increase (1) the likelihood of successful recognition and evaluation of student behavior, (2) depth of detail, and (3) computational efficiency. Also, we want (4) a robust model: it should perform reasonably well even if the student acts in unusual ways, and it should be able to recover from its own errors. (See [8] for discussion of the latter two issues.)

To gain the most benefit from its predictions in terms of understanding and evaluating student behavior, IMAGE generates multiple expectations. It does not stop at the first action that NEOMYCIN would take; it finds both the set of near-equivalent favorite actions and a range of secondary alternatives. This increases the chance of matching the student's observed action, and provides a spectrum of behaviors against which his behavior can be judged.

Of what value to the teaching module is a confirmed prediction? IMAGE has not simply generated data expectancies in each prediction, but has kept a trace of the strategies and domain rules used in the simulation of NEOMYCIN along the way. For example, the prediction leading to the expected query of "fever" also records the following:

    Task:        Test the focused hypothesis
    Strategy:    Strategy-rule403: If "trigger" and "enabling" data
                 have already been tried, then consider any other
                 evidence available.
    Focus:       Tension headache
    Domain rule: Rule156: if the patient has a fever, then his
                 headache is not a tension headache [.2 belief]

This information about the student's behavior is passed to the instructional executive, along with analogous information about the other predictions, many of which could be judged more appropriate at the moment. The generation of multiple predictions, with their traces of strategic decisions, thus adds a normative element to the understanding of student plans. Since we can often identify one strategic choice as preferable to another, we can group behavioral predictions by their desirability. This provides a ready basis for advising the student.
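Such a trace is essentially a small record passed along with each expectancy. A hedged sketch of the structure (the field names are ours, not IMAGE's) might be:

    from dataclasses import dataclass

    @dataclass
    class PredictionTrace:
        # One entry per expectancy, recording how the expert simulation
        # arrived at it; passed to the instructional executive so that
        # alternatives can be ranked and offered as advice.
        datum: str            # e.g. "fever"
        task: str             # e.g. "Test the focused hypothesis"
        strategy_rule: str    # meta-rule used in the simulation
        focus: str            # e.g. "tension headache"
        domain_rule: str      # e.g. "Rule156"
        belief: float         # evidential strength, e.g. 0.2 (disconfirming)
        desirability: int     # rank among the multiple predictions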
Recall the example in Section 2: the student apparently considered brain pressure in his first query, then moved to a new hypothesis. IMAGE's predictions show that the expert would have continued to pursue the more likely hypothesis of brain pressure, and that if it did pursue tension headache, stronger evidence could have been chosen (such as the role of emotional factors). The teaching module could use one of these alternatives as a basis for advising or testing.

The rule-based, bottom-up searches are a valuable complement to the top-down predictive phase. But they have a disadvantage: since the bottom-up searches are practical only under tightly focusing heuristics, we cannot get alternative answers. So they give no ready basis for comparison of possibilities, as the top-down predictions do. The only way to evaluate the appropriateness of a bottom-up explanation is by incorporating explicit "buggy" links [1, 11]. Buggy rules have not yet been added to the NEOMYCIN expert model.

4. Conclusion and current status

Preliminary tests of the IMAGE student modeler have indicated that the complementary search strategies of model-driven predictions and data-driven descriptions yield highly plausible analyses of students' strategic behavior. The efficiency, detail, and robustness of the modeling have also satisfied initial demands. Even with occasional unusual queries, IMAGE almost never yields implausible explanations, because (1) the bottom-up searches are highly focused, and (2) if an explanation is not confidently believed, only partial results are saved, in order to help disambiguate the next observation. We are now arranging to run controlled experiments with medical students and experts, in which we will test built-in methods of localizing inconsistencies to the student, the student model, or the expert model [8].

A few plan-understanding programs include a predictive phase (such as BELIEVER, for common-sense plans [10]). Very few plan understanders generate multiple predictions; for many applications this would be inefficient. Multiple predictions are useful in domains where (1) either the number and cost of likely solution paths (from high-level strategies down to result) are not very large, or else the paths can be ranked by appropriateness (so that generation of predictions can be selective), and (2) recognizing solution paths by observing final data is often combinatorially impractical. Medical diagnosis fits this description. Bottom-up searches are not ruled out; in fact, they complement the model-driven predictions by often explaining observations that violate expectations.

We have shown how the multiple-prediction strategy can aid plan recognition for teaching medical diagnosis in several ways: depth of detail in plan recognition; student evaluation (using ranked groups of "prescribed" behaviors as a standard); and complete alternatives ready to serve as advice or as a basis for testing the student. With the student's behavior explained as strengths and weaknesses in problem-solving strategy (ordered tasks and methods, hypothesis management and focus) as well as in domain-specific knowledge (hypotheses, rules, and relations), the tutor is in a position to pinpoint its instruction to the areas most in need of attention.

5. Acknowledgments

This research has been supported in part by ARPA and ONR contract N00014-79C-0302. Computational resources have been provided by the SUMEX-AIM facility (NIH grant RR 0078).
We wish to thank our GUIDON collaborators Reed Letsinger, Bruce Buchanan, and Dr. Timothy Beckett, MD. Special thanks to Jim Bennett, Barbara Hayes-Roth, and Diane Kanerva for their helpful comments on drafts of this paper.

References

[1] Brown, J.S. and Burton, R.R., Diagnostic models for procedural bugs in basic mathematical skills, Cognitive Science 2 (1978), 155-192.
[2] Carr, B. and Goldstein, I., Overlays: A theory of modelling for computer aided instruction, AI Memo 406, MIT AI Lab, 1977.
[3] Clancey, W.J., Transfer of rule-based expertise through a tutorial dialogue, Ph.D. Thesis, Stanford University (August 1979).
[4] Clancey, W.J. and Letsinger, R., NEOMYCIN: Reconfiguring a rule-based expert system for application to teaching, pp. 829-836, Proceedings of the Seventh IJCAI (1981).
[5] Clancey, W.J., Methodology for building an intelligent tutoring system, Rep. STAN-CS-81-894, Stanford Univ., 1981. To appear in Methods and Tactics in Cognitive Science, Kintsch, Polson and Miller, editors.
[6] Genesereth, M.R., Automated consultation for complex computer systems, Ph.D. Thesis, Harvard University (November 1978).
[7] Goldstein, I., Developing a computational representation for problem solving skills, AI Memo 495, MIT AI Lab, 1978.
[8] London, B., Plan recognition, overlays, and evaluation methods in student modeling, Technical report, Stanford Heuristic Programming Project, in preparation.
[9] Miller, M.L., A structured planning and debugging environment for elementary programming, International Journal of Man-Machine Studies 11 (1979), 79-95.
[10] Schmidt, C.F., Plan recognition and revision: Understanding the observed actions of another actor, Memo CBM-TR-115, Rutgers Univ., September 1980.
[11] Sleeman, D.H. and Smith, M.J., Modelling student's problem solving, Artificial Intelligence Journal 16 (1981), 171-187.
PROOF METHODS IN AN AGENDA-BASED, NATURAL-DEDUCTION THEOREM PROVER

Mabry Tyson
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

This note describes several methods of finding proofs used in APRVR, an agenda-based, natural-deduction theorem prover. APRVR retains a complete tree of all pending or completed goals and is able to choose the next goal to be processed from an agenda of pending goals. Through this mechanism some proof methods can be utilized that had been unavailable to an earlier prover that was not agenda-based. One approach allows information discovered in one path in an attempted proof to trigger a case split in another part of the attempted proof (NONLOCAL CASE SPLIT). Another procedure enables better handling of splitting a conjunction (AND-SPLIT) by making it possible to use more information in determining which conjunct should be split off first.

I INTRODUCTION

APRVR [3] is based upon earlier work by W. W. Bledsoe on his interactive theorem prover, IMPLY [2]. Both provers are natural-deduction systems for first-order logic that utilize the concepts of subgoaling, backward chaining, and forward chaining. APRVR's control structure is flat, choosing goals from an agenda, rather than being recursive, as IMPLY's is, proceeding from a goal only to its subgoals or exiting to its parent goal. Using an agenda allows the theorem prover to try briefly several possible paths that might lead to a proof, thereby yielding more information about the paths. The theorem prover can then spend more effort on the path that appears most promising until the proof succeeds or the prover decides that the path is not as attractive as was first thought.

This research was performed at the University of Texas at Austin and was supported in part by the National Science Foundation under Grant MCS 80-11417. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation.

APRVR proves theorems in first-order predicate calculus by first applying Skolemization to remove any quantifiers before proving the resultant open formula. The substitutions for free variables required during the proof of a goal (or subgoal) are returned as the value of that proof. If a goal is proved by generating subgoals, the substitutions returned during the proof of one subgoal may be needed for generating the remaining subgoals or for confirming that all the subgoals are consistent and can therefore be combined as a proof of the goal.

II AND-SPLIT

When the conclusion of a goal consists of several conjuncts, the goal can be achieved by splitting it into several subgoals, one for each conjunct. In the propositional case, independent proofs of these subgoals suffice to prove the goal. In first-order predicate logic, the possible occurrence of existential variables common to several conjuncts complicates matters, so that independent proofs of the conjuncts cannot be combined into a proof of the goal if the substitutions for the variables are in conflict. For example, in proving

(P(a) ∧ Q(b)) ⇒ (P(x) ∧ Q(x)),

we cannot allow the independent proofs (after Skolemization) of

(P(a) ∧ Q(b)) ⇒ P(x)   and   (P(a) ∧ Q(b)) ⇒ Q(x)

to be combined, because of the conflicting substitutions for the common variable x. One method (used in IMPLY) to avoid generating the two independent but conflicting proofs is to first find one of them and then apply the indicated substitution to the remaining conjunct before proving it.
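The danger in this sequential method can be made concrete with a small sketch. The following Python fragment is illustrative only (the representation and names are assumptions, not APRVR's actual data structures); it carries out the sequential method on the example above and shows how the first substitution found, x := a, blocks the proof of the second conjunct:

HYPOTHESES = [("P", "a"), ("Q", "b")]   # Skolemized facts P(a) and Q(b)
GOALS = [("P", "x"), ("Q", "x")]        # conclusion P(x) ^ Q(x), x a variable
VARIABLES = {"x"}

def prove_conjunct(goal, subst):
    # Match one goal literal against the hypotheses, extending subst;
    # return the extended substitution, or None if no proof exists.
    pred, arg = goal
    arg = subst.get(arg, arg)           # apply the substitution found so far
    for hyp_pred, hyp_arg in HYPOTHESES:
        if hyp_pred == pred:
            if arg == hyp_arg:          # already bound, and consistent
                return subst
            if arg in VARIABLES:        # bind the free variable
                return dict(subst, **{arg: hyp_arg})
    return None

subst = {}                              # sequential AND-split, as in IMPLY
for goal in GOALS:
    subst = prove_conjunct(goal, subst)
    if subst is None:
        print("blocked: conjunct unprovable under the earlier substitution")
        break
else:
    print("proved with substitution", subst)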
In most cases, a proof, if indeed any exists, of the remaining goal will not cause a conflict, thus allowing the proof of the original goal to be completed. In the example above, however, the second goal would become (P(a) ∧ Q(b)) ⇒ Q(a), which is obviously unprovable.

This method of choosing one conjunct, proving it first, and then using the indicated substitutions in proving the remaining conjuncts is a sequential AND-split. Proving the conjuncts one at a time is conceptually simple and works efficiently most of the time. Unfortunately, this sequential nature leads to problems. Consider the case in which the first goal has several proofs with differing substitutions for the common variables. If a theorem prover were to find all the proofs it could possibly find, it might waste time trying to find additional proofs after discovering the one that is really needed. On the other hand, it might also waste time if it stops looking for additional proofs once the first has been found and, using the substitutions returned, proceeds to try the (possibly) impossible remaining goals. The theorem prover is trapped into trying the wrong substitution simply because it had been returned by the first proof found. If the theorem prover had found a different first proof (or finds another one after realizing that the first proof did not lead anywhere), the remaining goals might be easily proved. In a recursive system, once the proof attempt on the first conjunct has exited with the first proof, there is no simple way to continue it.

The obvious way to mitigate this problem in an agenda-based theorem prover is to allow the proof of the first conjunct to be continued if the first proof does not lead anywhere. Therefore, if the first proof leads to a trapping substitution, a second proof would be sought. While this is easy to describe, it is difficult to put into practice. Blindly allowing the reactivation of goals that have been proved once, simply because there might have been another satisfying substitution, leads to much wasted effort. It became clear to me in my experimentation that restarting already proved goals usually caused extraneous attempts to reprove goals that did not need a different proof.

A less obvious method for reducing the trapping problem also takes advantage of the agenda mechanism. Instead of deciding which will be the first conjunct to be proved, attempt to prove each of the conjuncts individually, treating each as though it were the first. If the first proof found for any of the conjuncts uses the substitutions necessary for the desired solution, that solution will be found without having to restart any already proven goal. However, this technique entails duplication of effort: if a goal has N conjuncts, N! subgoals could eventually be created. Thus, rather than letting this technique run unchecked, APRVR creates goals for each conjunct, schedules them on the agenda, and also schedules an additional goal on the agenda for subsequent examination of the results of those goals after some effort has been expended on them (a sketch of this scheduling follows). If some of the subgoals have then been proved, the associated secondary subgoals (created by applying the returned substitutions to the remaining conjuncts) are allowed to remain active while the attempts to prove the other conjuncts are deactivated.
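The scheduling itself can be pictured with a small agenda sketch; the priorities, labels, and actions below are invented for illustration and carry none of the state a real goal record would:

import heapq

agenda = []   # priority queue of (priority, label, action)

def schedule(priority, label, action):
    heapq.heappush(agenda, (priority, label, action))

def and_split(conjuncts):
    # One proof attempt per conjunct, each treated as the "first" conjunct.
    for c in conjuncts:
        schedule(1, "prove first: " + c, lambda c=c: print("working on", c))
    # A monitor goal wakes up after some effort has been spent; it would
    # keep the successful attempt's secondary subgoals and prune the rest.
    schedule(5, "monitor AND-split", lambda: print("examining results"))

and_split(["x*a = a*x", "x*a = e"])
while agenda:
    _, label, action = heapq.heappop(agenda)
    action()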
If none of the initial subgoals has been proved, one of the conjuncts is chosen as the first conjunct while the other proof attempts are deactivated. This technique should be valuable wherever some of the conjuncts are critical in assigning substitutions, while others simply verify that the result of applying the substitutions has certain common properties. A simple example in group theory would be a goal whose conclusion is the conjunction

x·a = a·x  ∧  x·a = e,

where e is the identity element, x a variable, and a a constant. The first conjunct may have many satisfying substitutions (e.g., replacing x with e) while the second one is more critical. With this technique, both conjuncts would be tried; the quick solution for the second conjunct (substituting a⁻¹ for x) would be found and applied to the first conjunct, thereby leading readily to a proof.

III NONLOCAL CASE SPLIT

A very interesting use of the agenda is the nonlocal generation of case splits. In attempting to prove a theorem, people often "paint themselves into a corner"; that is, they get to a point at which, in some cases, the subgoal in question is false, while for the other cases the subgoal is provable. As an example, a proof of a theorem in field theory might reduce to proving

(a·b = a·c) ⇒ b = c.

Two cases exist, a ≠ 0 and a = 0, of which the first is provable but not the second. At this point a person will consider why one case is impossible and will ascertain the reason for this. He will then back up in his proof and try a different proof for that case. In this example, there may be another proof for the case where a is 0 (or it may be provable that a cannot be 0).

In certain situations (such as when triggered by an OR in the hypothesis or lemmas), APRVR will try to prove a goal by doing a case split; for each case, a separate subgoal is generated with that case as an added hypothesis. When starting a case split, APRVR determines the amount of effort it wants to expend on that goal before concluding that it is perhaps not provable. It then reschedules the goal that generated the case split for activation when that effort has been completed. If all the cases are proved beforehand, the goal is proved and the value is returned. But if APRVR does not prove the goal within the allotted time, a different procedure must be attempted (though the present case split is left on the agenda and could still succeed). If some of the cases have been proved and the case split is ground (i.e., contains no variables), APRVR will try the case split earlier in the proof, in the hope that the failing cases will be provable from a higher point in the tree. (The provable cases can always be proved by the same path as before.) Consequently, APRVR will back up along the path of the proof to an appropriate point, where it will then attempt the case split. As an example, APRVR might decide to split on whether a constant, a, is less than, greater than, or equal to 0. APRVR would create three subgoals and enter them in the agenda along with their parent goal, which is now rescheduled. Suppose the last two subgoals are proved quickly, but the first subgoal, involving a<0, has not been proved before the parent goal again reaches the head of the agenda; APRVR will then look for a higher goal on which to try the case split. The only new proof to generate there is the proof for the case a<0.* APRVR is not as intelligent as a human theorem prover, so it cannot make as clever a decision about where to try the case split; a skeletal version of the bookkeeping is sketched below.
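Under the same invented-representation caveat as the earlier sketches:

cases = {"a < 0": None, "a = 0": None, "a > 0": None}   # None = still pending

def record_proof(case):
    cases[case] = "proved"

record_proof("a = 0")     # suppose two cases are closed quickly...
record_proof("a > 0")

# ...but the parent goal reactivates before "a < 0" is proved.
unproved = [c for c, status in cases.items() if status is None]
if not unproved:
    print("goal proved: all cases closed")
else:
    # The split is ground, so it can be retried at a higher goal;
    # only the failing cases need new proofs there.
    print("retry the case split higher in the proof tree for:", unproved)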
At present, APRVR examines the structure of the proof and backs the case split as far up the proof tree as possible, until a goal is reached at which backing up any farther would reach a goal that would not necessarily be provable for those cases that were successful at a lower level. This might be just below an AND-split goal if APRVR had been working on the first branch of the AND-split and had yet to do the second branch. The goal stopped at is provable, given the right case, by the path already taken. The parent goal would not necessarily be proved, given that same case (since it is not known yet whether the second branch is provable). Assuming the case split is necessary for a proof, the chosen point may not be optimal, yet it is not a bad choice. If the proper point for the case split is below that selected, only a little extra work is required in passing the cases down to where the case split is necessary. If the proper point is actually above where APRVR chooses, then the proofs of the chosen goal for whatever cases do work will be reused when they are requested from above. The advantage of choosing where APRVR chooses, and no higher, is that any higher goal may not be provable for any of the cases. Further analysis of what substitutions are made, as well as where and why some of the cases fail, might lead to a better choice of a goal on which to try the case split.

IV OTHER PROPOSED METHODS

There are surely other methods available to a prover with an agenda mechanism like that of APRVR. More nonlocal methods might be found in those instances in which information from one part of the proof might affect what is being done elsewhere. Perhaps if a goal were found to be false (possibly by a counterexample), APRVR could trace back up the proof tree to the point at which a false ancestor of this goal had been generated. Other, similar goals might also be purged.

*Although the examples given are for cases involving inequalities, this nonlocal case-split mechanism works on any type of case. The backing-up of unproved cases in [1] is somewhat related, but is limited to inequalities.

APRVR uses demon goals to monitor the progress on AND-splits and case splits. These goals are placed on the agenda so that, when they are chosen, APRVR pauses in its normal attempts at finding a proof and stops to analyze what is happening in the part of the proof the demon goal was watching. These demon goals might be used in other contexts to monitor what APRVR is doing. While the idea of self-monitoring is not unique to agenda mechanisms, it was rather easy to implement by scheduling the demons in the same way as ordinary goals.

Since APRVR keeps the entire proof tree, if one goal is found similar (analogous) to another, the successful proof of one could be used as a guide in proving the other and modified wherever the similarity broke down. There are a number of problems to be overcome to accomplish this, but it should be possible.

SUMMARY

Although I had expected the agenda mechanism's major source of power to be the ability to make a better choice of paths, I discovered new heuristics that would aid in finding proofs that were not possible in the recursive IMPLY theorem prover. The only drawback of the agenda-based APRVR was a relative weakness in man-machine interaction, which is one of IMPLY's strengths. This weakness consists of the agenda system's tendency to change contexts more freely than is done by people.
APRVR was capable of having all the power of IMPLY (although certain extensions to IMPLY were not implemented), and even more. It proved a number of standard problems given to theorem provers; the most difficult new problem (suggested by W. W. Bledsoe) proved by APRVR comes from part of a proof that a continuous function attains its minimum over a closed region.

AM8:  ∀t[L > t ⇒ F(L) ≤ F(t)]
      ∧ ∀x[x > L ⇒ ∃t[t ≤ x ∧ F(x) > F(t)]]
      ∧ ∀w∃g[F(g) ≤ F(w) ∧ ∀x'[F(x') < F(w) ⇒ g < x']]
      ⇒ ∃u∀t' F(u) ≤ F(t')

Although this problem depends heavily on inequalities, no special-purpose machinery was incorporated (which might have expedited the proof). APRVR used both the AND-split and nonlocal case split mechanisms presented here in finding the proof without any human intervention.

REFERENCES

[1] W. W. Bledsoe and M. Tyson, "Typing and Proof by Cases," in Machine Intelligence 8, D. Michie, ed. (Ellis Horwood Limited, Chichester, Sussex, England, 1977).
[2] W. W. Bledsoe and M. Tyson, "The UT Interactive Prover," ATP-17A, University of Texas at Austin, Austin, Texas (1978).
[3] W. M. Tyson, APRVR: A Priority-Ordered Agenda Theorem Prover, Ph.D. Dissertation, University of Texas at Austin, Austin, Texas (1981).
Multiple Inheritance in Smalltalk-80

Alan H. Borning
Computer Science Department, FR-35
University of Washington
Seattle, WA 98195

Daniel H. H. Ingalls
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304

Abstract

Smalltalk classes may be arranged in hierarchies, so that a class can inherit the properties of another class. In the standard Smalltalk language, a class may inherit from only one other class. In this paper we describe an implementation of multiple inheritance in Smalltalk.

1. Introduction

Smalltalk is a powerful interactive language based on the idea of objects that communicate by sending and receiving messages [Ingalls 78, LRG 81, Goldberg 82]. Every Smalltalk object is an instance of some class. Classes are organized hierarchically, so that a new class is normally defined as a subclass of an existing class. The subclass inherits the instance storage requirements and message protocol of its superclass. It may add new information of its own, and may override inherited responses to messages.

In standard Smalltalk, a class can be a subclass of only a single superclass. On occasion, this restriction is undesirable and leads to unnatural coding styles. For example, the Smalltalk system includes a class Transcript that displays and records notification messages and the like. It is declared to be a subclass of Window, but also has the message protocol of a WriteStream to which one can append characters. Since it cannot be a subclass of both Window and WriteStream, the necessary methods for stream behavior must all be duplicated in Transcript. Such duplication is unmodular: if some method for streams is added or modified, the class Transcript does not automatically feel this change (as it would if it were a subclass of WriteStream). The natural solution is to allow classes to be subclasses of more than one superclass. In this paper we describe an implementation of multiple superclasses, which is now available in the Smalltalk-80 system used within Xerox PARC.

2. Semantics of Multiple Superclasses

A class may have any number of superclasses; however, an instance is always an instance of precisely one class.

2.1. Message Handling

When an instance receives a message, it first checks the method dictionary of its own class for a method for receiving that message. If none is found, it searches the method dictionaries of its immediate superclasses, then their superclasses, and so on. If a single method is found, then it is run. If no method or more than one method is found, an error message is issued. The overriding of inherited methods is still allowed; it is an error only if a class with no method of its own inherits different methods from two or more of its superclasses. Further, it is not an error if the same method is inherited via several paths. (This is a simplified explanation; Section 4 describes our actual implementation.)

2.2. Access to Overridden Inherited Methods

In single-superclass Smalltalk, the programmer can access an inherited overridden method using the reserved word super. For example, in code defined in a given class C, the inherited method for copy may be invoked using the expression super copy, even if C itself has a method for copy. This mechanism may be insufficient in the presence of multiple superclasses; for example, if C inherits two different methods for copy, the user needs a way to indicate which is wanted.
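The lookup rule of Section 2.1 can be sketched as follows. Python is used here purely as pseudocode for the Smalltalk mechanism; the classes and methods are invented stand-ins:

class Cls:
    def __init__(self, name, supers=(), methods=None):
        self.name, self.supers, self.methods = name, supers, methods or {}

def lookup(cls, selector):
    # Search the class itself, then each successive ring of superclasses.
    frontier = [cls]
    while frontier:
        found = {id(c.methods[selector]): c.methods[selector]
                 for c in frontier if selector in c.methods}
        if len(found) == 1:              # one method (perhaps via two paths)
            return next(iter(found.values()))
        if len(found) > 1:               # genuinely different methods
            raise TypeError("ambiguous inherited methods for " + selector)
        frontier = [s for c in frontier for s in c.supers]
    raise AttributeError("message not understood: " + selector)

window = Cls("Window", methods={"copy": lambda self: "window copy"})
stream = Cls("WriteStream", methods={"copy": lambda self: "stream copy"})
transcript = Cls("Transcript", supers=(window, stream))

try:
    lookup(transcript, "copy")
except TypeError as error:
    print(error)    # ambiguous inherited methods for copy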
To allow for this, we extend the syntax of Smalltalk by adding compound selectors consisting of a class name, followed by a period, followed by the actual selector, e.g. Object.copy. When one of these compound selectors is used in a message, the lookup for the method starts with the class named in the compound selector. When there is no ambiguity, it is still convenient to be able to say "use the method inherited from my superclass" without naming that superclass. In analogy with the above form of compound selector, this can be accomplished by writing e.g. self super.copy.

Finally, there are times when one would like to invoke all the inherited methods for a given selector, rather than just one of them; the principal example of this is the initialize method. To accomplish this, the programmer would write self all.initialize. It would be straightforward to add other sorts of method combination schemes using this basic mechanism.

3. Examples of Using Multiple Inheritance

In this section we present a number of examples that illustrate the usefulness of multiple inheritance.

3.1. Simula-style Linked Lists

Simula, which has a single-superclass inheritance hierarchy, defines a list-processing package that supports doubly-linked lists [Birtwistle 73]. The class Link specifies that each of its instances contain a reference to a successor and to a predecessor object. Subclasses of Link may then be defined that inherit this ability to be included in linked lists. An analogous class may easily be defined in Smalltalk. (An advantage of implementing linked lists in this way, rather than having a separate link object that simply points to an object X in the list, is that X can know about the list in which it resides.) However, there is a problem with the class Link in both Simula and single-superclass Smalltalk. Given an arbitrary existing class C, unless C already has Link in its superclass hierarchy, a programmer cannot use C in defining a new subclass that also has the properties of a Link. Multiple superclasses provide a natural solution. For example, if the programmer wants to make objects that are like windows and can also be included in doubly-linked lists, he or she can simply define a new class QueueableWindow that is a subclass of both Window and Link. The new class will inherit the instance state requirements and message protocol of both Window and Link, yielding the desired behavior.

3.2. Other Examples

As mentioned in the introduction, another situation in which multiple inheritance is useful is in defining the class Transcript as a subclass of Window and of WriteStream. To take another example from the standard Smalltalk-80 system, a number of kinds of streams are defined, including ReadStream, WriteStream, and ReadWriteStream. ReadWriteStream is rather arbitrarily declared to be a subclass of WriteStream, with the extra methods needed for ReadStream behavior copied by the programmer. Using our new system, ReadWriteStream is naturally defined as a subclass of both ReadStream and WriteStream.

3.3. Pool Variables

This last example is of a somewhat different nature. In addition to instance variables, the Smalltalk-80 language allows the programmer to define class variables that are shared by all instances of a given class and its subclasses.
However, on occasion, the programmer wants variables that are to be shared by a number of non-hierarchical classes, but which aren't properly declared to be global variables. A mechanism for handling this exists already: one may declare a dictionary of pool variables that may be shared among several classes. (An example of this is the FilePool of constants and variables that are shared by all the classes used in file I/O.) Multiple superclasses provide a more elegant solution. Rather than using pool variables, one can for example define a class FileObject that has class variables corresponding to all the variables that used to be in FilePool. Each of the file classes can now be made a subclass of FileObject as well as of its old superclass, so that it has access to these shared variables. In this way, the pool mechanism becomes unnecessary and could be eliminated from the language.

4. Implementation

4.1. Finding the Right Method to Receive a Message

Our implementation of multiple inheritance is a compromise between the extremes of strict runtime method lookup and copying down inherited methods from all superclasses. In the standard Smalltalk-80 system, methods inherited from superclasses are looked up dynamically. This has the advantage that the system is not cluttered with copied methods, and that there are no copies to update when a method is edited. An alternative would be to copy the inherited methods down into each subclass. This would make finding the methods easier at runtime, at the expense of greater code size and updating complexity. In our implementation of multiple inheritance, the standard dynamic lookup scheme is used for methods on the chain consisting of the first superclass of each class. If a class C has more than one superclass, at the time C is created it checks each message to which it can respond. If the appropriate method would be found by the dynamic lookup, nothing is done. However, if the appropriate method is in some other superclass, then the code for that method is recompiled in C's method dictionary, so that it will be found at run time. Finally, if there are conflicting inherited methods for a given selector, an error method is compiled in C for that selector and the user is notified. These error methods are put into a special category, making it easy for the user to browse to them and to resolve the conflicts as necessary.

4.2. Implementation of Compound Selectors

As described in Section 2.2, the programmer can access inherited methods using constructs such as self Object.copy, self super.copy, and self all.initialize. To implement these extensions, we changed the Smalltalk parser to treat compound selectors as single symbols, so that the code that is compiled in C for e.g. self B.copy actually sends the selector B.copy. The first time this is executed, no method for B.copy will be found. When this occurs, the interpreter invokes Object messageNotUnderstood. The usual behavior at this point is to bring up an error window. However, we modified Object messageNotUnderstood to first check for compound selectors. If one is found, then the system attempts to compile an appropriate method for that compound selector by first verifying that B is a superclass of C, and then looking for a copy method in B or its superclasses. If one is found, that method is recompiled in C under the selector B.copy. The system then resends the message, whereupon it will find the newly inserted method.
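Continuing the earlier lookup sketch, this lazy resolution and caching can be outlined as follows (again an illustrative Python stand-in, reusing Cls, lookup, and the Transcript example from above):

def class_and_ancestors(cls):
    seen, frontier = [], [cls]
    while frontier:
        c = frontier.pop()
        if c not in seen:
            seen.append(c)
            frontier.extend(c.supers)
    return seen

def send(cls, selector):
    try:
        return lookup(cls, selector)
    except AttributeError:                      # message not understood
        if "." not in selector:
            raise
        start_name, simple = selector.split(".", 1)
        by_name = {c.name: c for c in class_and_ancestors(cls)}
        start = by_name[start_name]             # verify B is a superclass of C
        method = lookup(start, simple)          # search B and its superclasses
        cls.methods[selector] = method          # cache under "B.copy" in C
        return send(cls, selector)              # resend; found directly now

print(send(transcript, "Window.copy")(None))    # -> window copy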
The next time B.copy is sent, this method will be found, making the operation efficient. Selectors such as super.copy and all.initialize are handled by the same mechanism.

4.3. Instance State

A subclass inherits all the instance field requirements of its superclasses, and can specify additional fields of its own. There is only one copy of fields inherited from a superclass via two inheritance paths. In our current implementation, it is an error if there are different inherited instance fields with the same name. (One of our previous experimental implementations [Borning 80] included a mechanism similar to the compound selector construct that allowed the programmer to disambiguate conflicting field names. We may re-introduce this mechanism if the present restriction proves too burdensome.)

To access or store into instance fields, the bytecodes produced by the Smalltalk compiler include instructions such as "load instance field 1". It is of course essential that code inherited from superclasses use the correct field positions for the subclass. Our scheme takes care of this in the following manner. The instance fields are arranged so that the fields inherited from the superclasses on the dynamic lookup chain have the same positions as they do in the superclasses. (This is the same situation as in single-superclass Smalltalk.) In general, fields inherited from other superclasses won't be in the same positions, but when the code for methods from these other superclasses is recompiled into the new subclass, the field positions are adjusted appropriately. As an optimization, before recompiling a method from a superclass the system checks whether the offsets of all the fields it references are the same in the subclass. If this is the case, then the system simply copies a pointer to the original method, rather than recompiling it.

4.4. Dynamic Updating

In the Smalltalk environment, the user can add, delete, and edit methods incrementally, and then immediately make use of the changed code. In our multiple-inheritance implementation, some updating may be necessary when such changes are made. If a method is edited which has been recompiled or copied into some subclasses, then the newly edited method is recompiled or copied into subclasses as necessary. Similarly, if a method is added or deleted, it may affect which inherited method should be used, and may require changes in the copied inherited methods. Again, the system takes care of this updating automatically. If methods with compound selectors (e.g. super.printOn:) have been automatically compiled into some subclasses, then these methods may be invalid as well. Each such method that may no longer be valid is simply deleted; as described above, it will be recompiled automatically the first time a message is sent that invokes it.

4.5. A Note on the Implementation Process

The changes required to add multiple inheritance to Smalltalk-80 are only a few pages of Smalltalk code. For example, changing the Smalltalk syntax to allow compound selectors of the form Point.copy or Point.+ required a change to only one method. Moreover, no changes to the Smalltalk-80 virtual machine were required. There are few other programming environments in which such a fundamental extension could be made so easily.

5. Relation to other Work

A number of other systems have used multiple inheritance.
Among the systems implemented in Smalltalk, the constraint laboratory ThingLab [Borning 81] and the PIE knowledge representation language [Goldstein and Bobrow 80] both supported multiple inheritance. The authors have also implemented some experimental predecessors of the present system [Borning 80]. Some extensions to Lisp allow the use of similar object-oriented programming techniques. The "flavors system" in MIT Lisp Machine Lisp [Cannon 80] allows an object to be defined using several flavors (analogous to multiple superclasses); this system also contains an extensive repertoire of method combination techniques for combining inherited information. Another object-oriented Lisp extension with multiple inheritance is the LOOPS system [Bobrow and Stefik 82], implemented in Interlisp. The Traits system [Curry 82], embedded in the Mesa system, is yet another multiple inheritance implementation. It has received extensive use in the coding of the Xerox Star office information system.

References

[Birtwistle 73] Birtwistle, G.M., Dahl, O.-J., Myhrhaug, B., and Nygaard, K. SIMULA Begin. Auerbach Press, 1973.
[Bobrow and Stefik 82] Bobrow, D.G., and Stefik, M.J. LOOPS: An Object Oriented Programming System for Interlisp. 1982.
[Borning 80] Borning, A.H. Multiple Inheritance in Smalltalk. 1980. Unpublished report, Learning Research Group, Xerox PARC.
[Borning 81] Borning, A.H. The Programming Language Aspects of ThingLab, A Constraint-Oriented Simulation Laboratory. ACM Transactions on Programming Languages and Systems 3(4):353-387, October, 1981.
[Cannon 80] Cannon, H.I. Flavors. Technical Report, MIT Artificial Intelligence Lab, 1980.
[Curry 82] Curry, G., Baer, L., Lipkie, D., and Lee, B. Traits: An Approach to Multiple Inheritance Subclassing. In ACM-SIGOA Conference on Office Automation Systems. ACM, June, 1982.
[Goldberg 82] Goldberg, A.J., Robson, D., and Ingalls, D.H.H. Smalltalk-80: The Language and its Implementation. 1982. Forthcoming book.
[Goldstein and Bobrow 80] Goldstein, I.P., and Bobrow, D.G. Extending Object Oriented Programming in Smalltalk. In Proceedings of the Lisp Conference. Stanford University, 1980.
[Ingalls 78] Ingalls, D.H.H. The Smalltalk-76 Programming System: Design and Implementation. In Proceedings of the Fifth Annual Principles of Programming Languages Symposium, pages 9-16. ACM, January, 1978.
[LRG 81] The Xerox Learning Research Group. The Smalltalk-80 System. Byte 6(8):36-48, August, 1981.
GLISP: A High-Level Language for A.I. Programming

Gordon S. Novak Jr.
Heuristic Programming Project
Computer Science Department
Stanford University

ABSTRACT

GLISP is a high-level LISP-based language which is compiled into LISP using a knowledge base of object descriptions. Lisp objects and objects in A.I. representation languages are treated uniformly; this makes program code independent of the data representation used, and permits changes of representation without changing code. GLISP's object description language provides a powerful abstract datatype facility which allows the structures and properties of objects to be described. Reference to objects is permitted in an English-like syntax, including definite reference relative to the current context of the computation. Object-centered programming is supported. When interfaced to a hierarchical representation language, GLISP can perform inheritance at compile time, resulting in substantial performance improvements. In addition, a LISP structure can be specified as the way of implementing a class of objects in the representation language, making simple objects efficient in both time and storage.

1. Introduction

Progress in A.I. is limited by our ability to manage the complexity inherent in A.I. problems; in particular, such problems often involve large numbers of different types of objects, whose properties and interactions must be modelled by a program. The need to manage a large number of object types has led to the development of a number of A.I. representation languages; however, these languages have generally suffered from two problems. First, while the languages provide benefits for storage of data and inheritance of procedures, access to objects must often be performed by low-level functions; in addition, the type of the data retrieved must often be remembered by the programmer. These factors make programs difficult to change after their initial implementation. Second, hierarchical representation languages have achieved power in data representation at the cost of performing data access interpretively at runtime; this has a high cost in execution time.

GLISP solves these problems, as well as being a powerful programming language in its own right. Data objects are described separately from code which references the objects, making code largely representation-independent, as well as shorter and more understandable. Type inference is performed when features of an object are accessed, and type information is propagated by the compiler during compilation. A change in representation, made in only one place, is reflected throughout the program upon recompilation. GLISP compiles efficient code for access to object properties. By performing lookup of inherited properties at compile time, expanding definitions of property access methods into open code, and using Lisp structures to implement instances of objects described in an A.I. representation language, GLISP can greatly increase the execution efficiency of programs using such a language.

The following example function illustrates some of the features of GLISP:

(GIVE-RAISE
  (GLAMBDA ((A COMPANY))
    (FOR EACH ELECTRICIAN WHO IS NOT A TRAINEE DO
      (SALARY ←+ (IF SENIORITY > 2 THEN 2.50 ELSE 1.50))
      (PRINT THE NAME OF THE ELECTRICIAN)
      (PRINT THE PRETTYFORM OF DATE-HIRED)
      (PRINT MONTHLY-SALARY))))

The GLAMBDA in the function definition causes the INTERLISP interpreter to call the GLISP compiler to incrementally compile the function the first time it is referenced.
In this example, only the type of the function's argument, (A COMPANY), is specified; this argument provides an initial Context of computation. Since GLISP allows definite references to features of objects which are in Context, it is not always necessary to give variable names to objects. With a COMPANY in Context, the definite reference to the ELECTRICIANs can be understood by the compiler as the ELECTRICIANs of that COMPANY. Within the FOR loop, the current ELECTRICIAN is in Context, permitting references to SALARY, SENIORITY, etc. to be resolved.

Such a function is compiled relative to a knowledge base of object descriptions; the GLISP code itself is independent of the actual data representations used. Separation of object descriptions from code permits substantial changes to object structures with no changes to the code. SENIORITY, for example, might be stored directly as part of an EMPLOYEE object, or it might be computed from other properties; for example, SENIORITY could be defined as

((THE YEAR OF (CURRENTDATE)) - (THE YEAR OF DATE-HIRED))

where (CURRENTDATE) is a function which returns the current date and DATE-HIRED is a substructure of an EMPLOYEE. Uniform treatment of stored and computed properties facilitates hiding the distinction between properties which are actually stored and those which are computed as needed. The compiled LISP code produced by GLISP is nearly as efficient as hand-coded LISP; the user must pay for compilation, but does not incur a substantial runtime penalty. Ordinary LISP is a subset of GLISP; normal LISP code is simply passed through by the GLISP compiler without change.

2. Object Descriptions

The following object description defines a VECTOR:

(VECTOR (LIST (X INTEGER) (Y INTEGER))
  PROP ((MAGNITUDE ((SQRT X^2 + Y^2))))
  ADJ  ((ZERO (X IS ZERO AND Y IS ZERO))
        (NORMALIZED (MAGNITUDE = 1.0)))
  MSG  ((+ VECTORPLUS OPEN T)
        (- VECTORDIFFERENCE OPEN T)
        (PRIN1 ((PRIN1 "(") (PRIN1 X) (PRIN1 ", ") (PRIN1 Y) (PRIN1 ")")))
        (PRINT ((Send self PRIN1) (TERPRI)))))

The actual structure of the object is a LIST of two INTEGERs, X and Y. The language of Structure Descriptions allows the standard LISP structures to be specified. Objects may be defined in terms of other objects, and can inherit the features and behavior of component objects. The rest of the Object Description defines Properties, Adjectives, and Messages for the object. The MAGNITUDE is defined in terms of a LISP function, SQRT, whose argument is an expression involving definite references to the X and Y components of the VECTOR. Adjectives are used in predicate expressions on objects, e.g., (IF THE VECTOR IS NOT NORMALIZED THEN ...). The adjective NORMALIZED is easily defined using definite reference to the property MAGNITUDE, which in turn is defined in terms of other properties.

3. Definite Reference and English-Like Programming

GLISP permits English-like statements, including definite reference to objects which are in the current computational Context. The Context, a compile-time construct which initially contains the arguments of a function, is propagated through the program along control flow paths by the compiler. Newly referenced objects are added to Context, so that their features may then be referenced directly. A variable may be referenced by its name ("X") or its type ("The ELECTRICIAN"). A substructure or Property of an object may be referenced in English-like forms ("The SALARY of X" or "The SALARY" or "SALARY") or in PASCAL-like syntax ("X:SALARY").
The forms "The ELECTRICIAN", "The SALARY", and "SALARY" are definite references, since the source object is unspecified; such references are resolved by finding an object in the Context which is of the specified type or has the specified property. Definite reference to the member(s) of a group which satisfy a specified condition is also allowed, as in (The Slot with SlotName = NAME) or (Those Faculty-Members who are Tenured). Definite reference allows programs to be shorter and more readable, allows easy definition of object properties in terms of other properties, and makes programs easier to modify. Definite reference is potentially ambiguous (i.e., there could be multiple objects in the Context with a specified property); however, in practice this has not proved to be a problem. The ordering of the Context search by the compiler and the user's skill in use of definite reference as a speaker of natural language generally prevent inadvertent references; a reference can always be made unambiguous by specifying the source object.

4. Messages

Object-Centered Programming, inspired by SIMULA [1] and SMALLTALK [2] [3], has become increasingly popular; it views objects as active entities which communicate by exchanging Messages. A Message specifies the destination object, a Selector which identifies the message type, and optional Arguments. When a message is received, the receiving object looks up the Selector to find a corresponding procedure to respond to the message and executes it. Typically, objects are organized in a Class hierarchy, so that the procedure which is used to execute a message to an object is inherited from the object's Class.

GLISP permits optimized compilation of Messages. An Object Description contains the Selector and associated Response for each message the object can receive. The Response may be either GLISP code, which is recursively compiled in-line, or a function name, in which case a direct call to the function (or open code) is compiled. Properties and Adjectives are compiled as if they were messages without arguments. Because the Response to a message can be GLISP code which is compiled recursively in the context of the object in question, and which can use definite reference to access substructures and properties of the object, it is easy to define properties in terms of other properties. If arithmetic operators are defined as message Selectors for a class of objects, arithmetic expressions involving objects of that type will be compiled as calls to the corresponding Response functions, as illustrated in the VECTOR example above; open compilation of the Response for operators allows the resulting code to be efficient.

5. GLISP and Knowledge Representation Languages

An interface is provided to allow GLISP to be used with a knowledge representation language of the user's choice: each Class of objects in the representation language becomes a valid type for GLISP. Use of a representation language gives GLISP additional power, since properties and messages can be inherited through the language's hierarchies. In addition, GLISP can significantly improve runtime performance by doing procedural inheritance at compile time and compiling a direct call to an inherited procedure; for simple functions, open compilation avoids the function call as well. Messages which cannot be resolved at compile time are interpreted at runtime, as usual. Messages which in fact specify data access can be recognized as such and can be compiled to perform the data access directly.
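As a rough illustration of this compile-time expansion, consider the following toy expander. It is a Python sketch of the idea only (GLISP itself is written in Lisp, and the table below invents a concrete layout for EMPLOYEE); named properties are rewritten, recursively, until only direct accesses remain:

import re

OBJECTS = {
    "EMPLOYEE": {
        "fields": {"SALARY": "(CADDR self)", "DATE-HIRED": "(CADR self)"},
        "props": {"MONTHLY-SALARY": "(TIMES SALARY 174)"},
    },
}

def expand(typ, expr):
    # Expand defined properties first (recursively), then field accesses.
    desc = OBJECTS[typ]
    for name, body in desc["props"].items():
        pat = r"\b" + re.escape(name) + r"\b"
        if re.search(pat, expr):
            expr = re.sub(pat, expand(typ, body), expr)
    for name, access in desc["fields"].items():
        expr = re.sub(r"\b" + re.escape(name) + r"\b", access, expr)
    return expr

print(expand("EMPLOYEE", "MONTHLY-SALARY"))   # -> (TIMES (CADDR self) 174)

The result parallels the compiled code shown in Section 8, where MONTHLY-SALARY appears as (TIMES (CADDR GLVAR2) 174).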
Representation languages often involve structural overhead which is costly in terms of storage and access time; the overhead encourages users of such a language to have a mixed representation, with complex objects represented in the language and simple objects represented as LISP structures which are manipulated directly. GLISP allows the user to have the best of both worlds. A LISP structure can be specified as the way of implementing a Class of objects. While such an object appears the same as other objects in the representation language in terms of its definition and access language, direct LISP code is compiled for all accesses to its data values. In this way, all objects appear uniform, but simple objects are represented efficiently.

Recursive compilation of object properties can be used to achieve a certain amount of automatic programming. For example, a knowledge base can define the property DENSITY as MASS/VOLUME at a high level, for all physical objects. If a function references the DENSITY of a PLANET object, this definition of density will be inherited and compiled recursively in the context of the original object, namely a PLANET. If PLANET has SPHERE as one of its parent classes, the definition for VOLUME can be inherited from SPHERE. The result of such recursive compilation is that different in-line code is compiled for different kinds of objects from the same high-level definition of DENSITY.

6. Discussion

GLISP provides several novel and valuable language features:

1. Object Descriptions provide a powerful abstract datatype facility, including Properties, Adjectives, Messages, and operator overloading for user datatypes. Objects may be composed from other objects, inheriting their properties.
2. Objects in Knowledge Representation Languages, as well as LISP objects, can be used in a uniform fashion.
3. English-like programming, including definite reference relative to the Context, is allowed.
4. Optimized compilation is performed for hierarchical representation languages.

These capabilities are integrated, and reinforce each other synergistically. While other systems, e.g. CLISP [4] and Flavors [5], provide some of the features of GLISP, none (to our knowledge) do so in a way that is integrated and allows the user to choose the data representation.

7. Implementation Status

GLISP [6], including all features described in this paper, is currently running. To date, it has been interfaced to the GIRL representation language, and to the object-centered representation language LOOPS [7]. GLISP was originally implemented within INTERLISP; it is currently available for INTERLISP, MACLISP, FRANZ LISP, and UCI LISP. GLISP thus provides a high-level Lisp-based language which is transportable across the major dialects of Lisp.

8. Example

This section illustrates how the example function discussed earlier is compiled for a particular choice of data structures. The compiler optimizes the iteration to avoid explicit construction of the set of ELECTRICIANs.
(GIVE-RAISE
  (GLAMBDA ((A COMPANY))
    (FOR EACH ELECTRICIAN WHO IS NOT A TRAINEE DO
      (SALARY ←+ (IF SENIORITY > 2 THEN 2.5 ELSE 1.5))
      (PRINT THE NAME OF THE ELECTRICIAN)
      (PRINT THE PRETTYFORM OF DATE-HIRED)
      (PRINT MONTHLY-SALARY))))

(GLISPOBJECTS

(COMPANY (ATOM (PROPLIST (PRESIDENT (AN EMPLOYEE))
                         (EMPLOYEES (LISTOF EMPLOYEE))))
  PROP ((ELECTRICIANS ((THOSE EMPLOYEES WITH JOBTITLE = 'ELECTRICIAN)))))

(EMPLOYEE (LIST (NAME STRING)
                (DATE-HIRED (A DATE))
                (SALARY REAL)
                (JOBTITLE ATOM)
                (TRAINEE BOOLEAN))
  PROP ((SENIORITY ((THE YEAR OF (CURRENTDATE)) - (THE YEAR OF DATE-HIRED)))
        (MONTHLY-SALARY (SALARY * 174)))
  ADJ ((HIGH-PAID (MONTHLY-SALARY > 2000)))
  ISA ((TRAINEE (TRAINEE)))
  MSG ((YOURE-FIRED (SALARY ← 0))))

(DATE (LIST (MONTH INTEGER) (DAY INTEGER) (YEAR INTEGER))
  PROP ((MONTHNAME ((CAR (NTH '(January February March April May June July
                                August September October November December)
                              MONTH))))
        (PRETTYFORM ((LIST DAY MONTHNAME YEAR)))
        (SHORTYEAR (YEAR - 1900)))))

With these data structure definitions, the function GIVE-RAISE is compiled (for INTERLISP) as follows:

(GIVE-RAISE
  (LAMBDA (GLVAR1)
    (MAPC (GETPROP GLVAR1 (QUOTE EMPLOYEES))
          (FUNCTION
            (LAMBDA (GLVAR2)
              (AND (EQ (CADDDR GLVAR2) (QUOTE ELECTRICIAN))
                   (COND
                     ((NOT (CAR (NTH GLVAR2 5)))
                      (RPLACA (CDDR GLVAR2)
                              (PLUS (CADDR GLVAR2)
                                    (COND ((IGREATERP (IDIFFERENCE (CADDR (CURRENTDATE))
                                                                   (CADDR (CADR GLVAR2)))
                                                      2)
                                           2.5)
                                          (T 1.5))))
                      (PRINT (CAR GLVAR2))
                      (PRINT (PROG ((self (CADR GLVAR2)))
                               (RETURN (LIST (CADR self)
                                             (CAR (NTH (QUOTE (January February March
                                                               April May June July
                                                               August September October
                                                               November December))
                                                       (CAR self)))
                                             (CADDR self)))))
                      (PRINT (TIMES (CADDR GLVAR2) 174)))))))))) 

References

[1] Birtwistle, Dahl, Myhrhaug, and Nygaard, SIMULA BEGIN, Auerbach, Philadelphia, PA, 1973.
[2] Ingalls, D., "The Smalltalk-76 Programming System: Design and Implementation," 5th ACM Symposium on Principles of Programming Languages, ACM, 1978, pp. 9-16.
[3] Goldberg, A., et al., BYTE Magazine, Special Issue on Smalltalk, August, 1981.
[4] Teitelman, W., INTERLISP Reference Manual, Xerox Palo Alto Research Center, 1978.
[5] Cannon, Howard I., "Flavors: A Non-Hierarchical Approach to Object-Oriented Programming," Tech. report (unnumbered), M.I.T. A.I. Lab, Oct. 1981.
[6] Novak, Gordon S., "GLISP Reference Manual," Tech. report STAN-CS-895, Computer Science Dept., Stanford Univ., Jan. 1982.
[7] Bobrow, D.G. and Stefik, M., "The LOOPS Manual," Tech. report KB-VLSI-81-13, Xerox Palo Alto Research Center, 1981.
DADO: ATREE-STRUCXUREDMACRINE ARCRITECTURE FoRERODUcTIONSYSTEMS* Salvatore J. Stolfo and David Elliot Shaw Coltiia University ABs!rRAcT DAD3 is a parallel tree-structured machine designed to provide highly significant performance improvements in the execution of large Production systems. The DAD0 machine comprises a large (on the order of a hundred thousand) set of processing elements (PE's), each containing its awn processor, a small amount (2K bytes, in the current design) of local random access memory, and a specialized I/O switch. The PE's are interconnected to form a complete binary tree. This paper describes a general procedure for the parallel execution of production systems on the DAD0 machine, and outlines in general terms how this procedure can be extended to include conmutative and multiple, independent production systems. 1. Introduction DAD0 [Stolfo and Shaw, 19811 is a parallel, tree-structured machine designed to provide highly significant performance improvements in the execution of production systems. A production system [Newell, 1973; Davis and King 1975; Rychener, 19761 is defined by a set of rules, or productions, which form the production memory (PM), together with a database of assertions, called the workinq memorv MM). Each production consists of a conjunction of pattern writs, called the left- &@ side (LHS) of the rule, along with a set of actions called the a-hand side (RRS). The RRS specifies information which is to be added to (asserted) or removed from WM when the LHS successfully mtches against the contents of WM. In operation, the PS repeatedly executes the following cycle of operations: 1. Match: For each rule, determine whether the LRS matches the current environment of WM. ---- *This research was supported in part by the Defense Advanced Research Projects Agency under Contract N00039-82-C-0427. 2. Select: Choose exactly one of the mtching rules according to same predefined criterion. 3. A&: Add to or delete from WM all assertions spcified in the RI-IS of the selected rule. In this paper, data elements in WM will have the form of arbitrary ground literals in the first order predicate calculus. For pedagogical reasons, we will restrict our attention to the case in which both the LHS and RRS are conjunctions of predicates in which all first order terms are composed of constants and existentially quantified variables. (DAD3 in fact supports the incorporation of universally quantified variables in the LRS of a production as well, but an adequate treatment of this case would substantially complicate our exposition, and has thus been titted. The interested reader is referred to a discussion of the LSEC algorithm for logical satisfaction, presented in a doctoral dissertation by Shaw [ 19801.) A negated pattern in the LHS causes the matching procedure to fail whenever WM contains a matching ground literal, while a negated pattern in the RRS causes all matching data elements in WM to be deleted. An example production is presented in Figure l-l. (Variables are prefixed with an equal sign.) 
(Part-category =part electronic-component) (Used-in =part =product) (Supplied-to =product =custaner) (NOT Manufactured-by apart --cstmr) -> (Dependent-on =customer =part) (WI Independent =customer) Figure l-l: An Example Production Because the matching of each rule against WM is essentially independent of the others (at least in the absence of oontention for data in WM), it is natural to attempt a decanposition of the matching portion of each cycle into a large number of tasks suitable for physically concurrent execution on parallel hardware. While the design of special- purpose parallel machines adapted to artificial intelligence applications has attracted scme attention [Fahlman, 1979; Fuhlrott, 19821, little 242 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. progress has been made in the application of highly concurrent hardware to the execution of rule-based systems. Forgy [1980] proposed a very interesting use of the ILLIAC IV machine for such applications, but recognized that his approach failed to identify all matching productions under certain circumstances. In this paper, we describe a tree-structured machine architecture that utilizes the emerging technology of VLSI systems in support of the highly efficient parallel execution of large-scale production systems. Portions of the machine, which we have come to call DAIXJ, are now in the early stages of construction at Columbia University. We believe a full-scale DADO prototype, capble of significant performance improvements over implementations based on von Neumnn machines, to be technically and economically feasible for implementation using current technology. When a DAD0 PE enters MIMD mode, its I/O switch settings are changed in such a way as to effectively "disconnect" it and its descendants fram all higher-level PE's in the tree. In particular, instructions a PE in MIMD mode does not receive any that might be placed on the tree- structured cornnunication bus by one of its ancestors. Such a PE may, however, broadcast instructions to be executed by its own descendants, providing all of these descendants have themselves been switched to SIMD mode. The DAD0 machine can thus be configured in such a way that an arbitrary internal node in the tree acts as the root of a tree-structured, NON-VDN-like SIMD device in which all PE's execute a single instruction at a given point in time. As in NON-VCN, the DAD0 I/O switch supports cmunication between physically adjacent neighbors (parents and children) within the tree in addition to broadcast-based ccprmunication. 2. The DAD0 Machine Architecture 3. Allocation of Productions and working Memory The DAD0 machine cchnprises a very large (on the order of a hundred thousand) set of processing m (PE's), each containing its own processor, a small amount (2K bytes, in the current design) of local randam access memory, and a specialized I/O switch. The PE1s are interconnected to form a canplete binary tree. Certain aspects of the DAD0 machine are modelled after KN-VON [Shaw, 1979; Shaw, et al., 19811, a tree-structured, highly parallel machine containing a larger number of much simpler processing elements. In NON-VON, most of the PE's are severely restricted in both processing power and storage capacity, and are thus not typically used to execute independent programs. Instead, a single control processor, located at the root of the NON- VON tree, typically broadcasts a single stream of instructions to all PE's in the tree. 
Each such instruction is then simultaneously executed (on different data) by every PF in the tree. This mode of operation has been referred to in the literature of parallel canputation as single instruction stream, multiple data stream (SIMD) execution [Flynn, 19721. (!Che above description is in fact somewhat oversimplified, since EJ(;N-VON in fact permits independent instruction streams to be broadcast to selected subtrees. Such subtrees, though, must be rooted at a single, fixed level within the tree, where additional processing power is available.) Within the DAD0 machine, on the other hand, each PE is capable of executing in either of two modes. In the first, which we will call m mode, the PE executes instructions broadcast by scone ancestor PE within the tree, as in the KBWON mchine. In the second, which will be referred to as JIIMD mode (for multiple instruction stream, multiple data stream), each PE executes instructions stored in its own local RAM, independently of the other PE's. In order to execute the production system cycle, the I/O switches are configured in such a way as to divide the DADC rrrachine into three conceptually distinct components. One of these canponents consists of all PE's at a particular level within the tree, called the m level, which is chosen in a manner to be detailed shortly. The other two components are the uouer portion of the tree, which canprises all PE's located above the PM level, and the lower portion of the tree, which consists of all PE's found below the PM level. This functional division is illustrated in Figure 3-l. Each PE at the PM level is used to store a single production. The PM level must thus be chosen such that the number of nodes at that level is at least as large as the number of productions in PM. The subtree rooted by a given PE at the PM level will store that portion of WM that is relevant to the production stored in that PE. A ground literal in WM is defined to be relevant to a given production if its predicate sy&ol agrees with the predicate symbol in one of the pattern literals in the LHS of the production, and all constants in the pattern literal are equal to the corresponding constants in the ground literal. Intuitively, the set of ground literals relevant to a given production consists of exactly those literals that might match that production, given appropriate variable bindings. Ihe constituent subtrees that make up the lower portion of the tree will be referred to as the m-subtree For simplicity, we will assume in this paper thi< each PE in a WM-subtree rooted by some production contains exactly one ground literal relevant to that production. (Using "packing" techniques analogous to those employed in NON-VON, however, this assumption is easily relaxed at the expense of a modest cost in time.) It should be noted that, since a single ground literal may be 243 -PM Level: match, determine relevance 8 instantiate relevant to more than one production, portions of WM may in general be replicated in different WM- subtrees. During the match @se, the WM-subtrees are used as co&&-addressable memories, allowing parallel zatching in time independent of the size of WM. The upper portion of the tree is used to select one of the matching productions to be executed, and to broadcast the action resulting frcan this execution (both in O(log P) time, where P is the number of productions). Details of these functions follow. 4. 
4. The Matching Phase

At the beginning of the matching phase, all PE's at the PM level are instructed to enter MIMD mode, and to simultaneously (and independently) match their LHS against the contents of their respective WM-subtrees. The ability to concurrently match the LHS of all productions accounts for some, but not all, of the parallelism achieved in DADO's matching phase. In addition, the matching of a single LHS is performed in a parallel manner, using the corresponding WM-subtree as an associative processing device.

The simplest case involves the matching of a single LHS pattern predicate containing at most one instance of any variable. In order to match the predicate (Part-category =part electronic-component), for example, the PM-level PE corresponding to the production in question would first broadcast a sequence of instructions to all PE's in the WM-subtree that would cause each one to simultaneously compare the field beginning in, say, its fifth RAM cell with the string "Part-category". All non-matching PE's would then be disabled, causing all subsequent instructions to be ignored for the duration of the match. Next, the string "electronic-component" would be broadcast, along with the instructions necessary to match this string against, say, the field beginning in the thirty-fifth RAM location of all currently enabled PE's. After again disabling all non-matching PE's, the only PE's still enabled would be those containing a ground literal that matches the predicate in question. If this were the only predicate in the LHS, matching would terminate at this point. It should be noted that the time required for this matching operation depends only on the complexity of the pattern predicate, and not on the number of ground literals stored in the WM-subtree (a software sketch of this broadcast-and-disable scheme appears at the end of this section).

The general matching algorithm, which accommodates an LHS consisting of a number of conjoined predicates, possibly including common pattern variables, is considerably more complex. While space does not permit a complete exposition of the general algorithm, readers familiar with the literature of relational database systems, and in particular, database machines, may find the following brief comments illuminating. First, we note that the set of all ground literals in a single WM-subtree may be regarded as comprising several relations, each the extension of some pattern literal. Viewed in this way, the general production matching problem reduces to a problem for which Shaw [1980] has proposed, and simulated in software, a highly efficient solution involving the use of associative hardware to evaluate relational algebraic primitives in parallel. The result is a new relation embodying the variable bindings corresponding to all possible instantiations of the production in question that are consistent with the contents of WM.
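The broadcast-and-disable scheme can be simulated in software. In this sketch (an assumption-laden model, not the hardware design), each simulated PE holds one ground literal as a tuple of fields; the PM-level PE broadcasts one comparison per constant field, and the number of broadcast steps depends only on the pattern, not on the number of literals stored.

    # Software simulation of the associative match described above.
    class PE:
        def __init__(self, literal):
            self.fields = literal          # (predicate, arg1, arg2)
            self.enabled = True

    def broadcast_compare(pes, field_index, value):
        """One SIMD step: every enabled PE compares one field; mismatching
        PE's disable themselves and ignore the rest of the match."""
        for pe in pes:
            if pe.enabled and pe.fields[field_index] != value:
                pe.enabled = False

    def associative_match(pes, pattern):
        for pe in pes:                     # re-enable the whole WM-subtree
            pe.enabled = True
        for i, term in enumerate(pattern):
            if not term.startswith("="):            # constants only
                broadcast_compare(pes, i, term)     # time ~ pattern, not |WM|
        return [pe.fields for pe in pes if pe.enabled]

    subtree = [PE(("Part-category", "cpu-7", "electronic-component")),
               PE(("Part-category", "bolt-2", "fastener")),
               PE(("Used-in", "cpu-7", "workstation"))]
    print(associative_match(subtree,
                            ("Part-category", "=part", "electronic-component")))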
5. The Selection Phase

Since each production is asynchronously matched against the data stored in its WM-subtree, the production matching phase will in general terminate at different times within each PM-level PE. At the end of the matching phase, the PM-level PE's must thus be synchronized before initiation of the selection phase. In support of this synchronization operation, each PM-level PE sets a local flag upon completion of its own matching task. The I/O switch contains combinatorial hardware that permits the DADO tree to compute a logical conjunction of these flags in time equal to O(log n) gate delays. DADO's tree-structured topology, along with the combinatorial, as opposed to sequential, computation of this n-ary "logical AND", leads to a synchronization time that is dominated by the time required for matching, and that may, in practice, be ignored in analysis of the time complexity of the production system cycle.

The selection of a single production to "fire" from among the set of all matching productions also requires time proportional to the depth of the tree. Unlike the synchronization operation, however, the primitive operations required for selection are computed using sequential logic. We assume that each PM-level PE performs some local computation prior to the synchronization operation that yields a single, numerical priority rating. PE's containing matching productions are assigned positive values, while other PM-level PE's are assigned a priority of zero. We also assume that each PM-level PE has a distinct tag, stored in a fixed location within its local memory, which may be used to uniquely identify that PE.

After synchronization, all PM-level PE's are instructed to enter SIMD mode. Each such PE is then instructed to send its priority rating to its parent. Each parent compares the priority ratings of its two children, retaining the larger of the two, along with the unique tag of the "winner". The process is repeated at successively higher levels within the tree until a single tag arrives at the root. This tag is then broadcast to all PM-level PE's for matching, disabling all except the one having the highest priority rating, which remains enabled for the action phase (a software sketch of this tournament appears at the end of the next section).

6. The Action Phase

At this point, the "winning" PE is instructed to instantiate its RHS, which is then broadcast to the root. Next, all PM-level PE's are enabled, and the RHS of the winning instance is broadcast to all. The details of the action phase are made more complex by the importance of avoiding unnecessary replication of WM literals within the lower portion of the tree, and of reclaiming local memory space freed by the deletion of such literals. These functions are based on associative operations similar to those employed in the matching operation.

The PE's at the PM level are instructed to enter MIMD mode and to concurrently update their WM-subtrees as specified by the RHS of the winning instance. First, the PM-level PE's perform an associative probe for each literal to be deleted from WM, enabling only those PE's in the WM-subtrees whose local memories are to be reclaimed. The enabled PE's are then instructed by the PM-level PE to overwrite their stored ground literal with a special free-tag identifying empty PE's. This tag is the target of the subsequent associative probe executed for each of the ground literals to be added to WM.

When processing an asserted literal, the PM-level PE first determines whether or not the literal is relevant to its stored production. Next, the associative operation identifies those relevant literals which are not present in the WM-subtree, and thus are to be stored in some empty PE. After probing for the free-tag, all PE's are disabled except the empty PE's. To avoid duplication of asserted literals, all but one of these PE's is disabled by a multiple match resolution scheme which uses combinatorial hardware in the I/O switch to very rapidly clear a flag in all but an arbitrary "first" enabled PE. The asserted literal is then broadcast to the one enabled PE.

As in the matching phase, the action phase in general will terminate at different times in each PM-level PE. After synchronization, another cycle of production system execution begins with the production matching phase.
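Here is a small software sketch of the O(log P) selection tournament described above: (priority, tag) pairs flow upward through a complete binary tree, and each parent keeps its higher-priority child's pair. The array-based tree and the sample priority values are illustrative assumptions.

    def select_winner(pm_level):
        """pm_level: list of (priority, tag) pairs, one per PM-level PE;
        non-matching PE's carry priority zero."""
        level = list(pm_level)
        while len(level) > 1:                    # one pass per tree level
            if len(level) % 2:
                level.append((0, None))          # pad an incomplete level
            level = [a if a[0] >= b[0] else b    # parent keeps the winner
                     for a, b in zip(level[0::2], level[1::2])]
        return level[0]                          # (priority, tag) at the root

    print(select_winner([(0, "p0"), (7, "p1"), (3, "p2"), (9, "p3")]))
    # -> (9, 'p3'): the tag broadcast back down to enable the winning PE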
7. Specialized Production Systems

The general scheme for production system execution on DADO can be extended to support commutative production systems, as well as "cooperating expert systems" based on multiple, independently executing production systems.

A commutative production system allows each of the matching rules on every cycle of operation to be selected for execution. The same combinatorial hardware used in the action phase to select a single arbitrary "free" PE supports this operation by enumerating each of the matching productions in an arbitrary sequential order. Each of the RHS's so reported to the root is then processed by the action phase.

In our exposition of the general production system algorithm, it was assumed that the upper tree was rooted at the (physical) root of DADO (see Figure 3-1). Since each PE in the DADO tree can execute its own independent program, the upper tree can be rooted at an arbitrary internal node of DADO. Thus, multiple, independent production systems are executed on the DADO machine by rooting a forest of upper trees at the same fixed level of the DADO tree. Communication among these independent production systems is implemented in the same fashion as communication among the PM-level PE's during the action phase of the (commutative) production system cycle.

REFERENCES

Davis, Randall and Jonathan King. An Overview of Production Systems. Technical Report AIM-271, Stanford University Computer Science Department, 1975.

Fahlman, Scott E. A Hashnet Interconnection Scheme. Technical Report 125, Department of Computer Science, Carnegie-Mellon University, 1979.

Flynn, Michael J. Some computer organizations and their effectiveness. IEEE Transactions on Computers, 948-960, September, 1972.

Forgy, Charles L. A Note on Production Systems and ILLIAC IV. Technical Report 130, Department of Computer Science, Carnegie-Mellon University, July, 1980.

Fuhlrott, Oskar. Bibliography on AI Machines. SIGART Newsletter (79), January, 1982.

Newell, Allen. Production Systems: Models of Control Structures. In W. Chase (editor), Visual Information Processing, Academic Press, 1973.

Shaw, David Elliot. A Hierarchical Associative Architecture for the Parallel Evaluation of Relational Algebraic Database Primitives. Technical Report STAN-CS-79-778, Department of Computer Science, Stanford University, October, 1979.

Shaw, David Elliot. Knowledge-Based Retrieval on a Relational Database Machine. PhD thesis, Department of Computer Science, Stanford University, 1980.

Shaw, David Elliot, Salvatore J. Stolfo, Hussein Ibrahim, Bruce Hillyer, Gio Wiederhold and J. A. Andrews. The NON-VON Database Machine: A Brief Overview. Database Engineering 4(2), December, 1981.

Stolfo, Salvatore J. and David Elliot Shaw. Specialized Hardware for Production Systems. Technical Report, Department of Computer Science, Columbia University, August, 1981.

Rychener, Michael. Production Systems as a Programming Language for Artificial Intelligence Applications. PhD thesis, Department of Computer Science, Carnegie-Mellon University, 1976.
1982
83
182
Toward Connectionist Parsing

Steven Small, Gary Cottrell, and Lokendra Shastri
Department of Computer Science
The University of Rochester
Rochester, New York 14627

Abstract

The parsing of natural language is the product of dense interactions among various comprehension processes. We believe that traditional models have greatly underestimated the richness of these interactions. We propose a model for low-level parsing which is massively parallel, highly distributed, and highly connected. The model suggests a solution to the problem of word sense disambiguation which is psychologically plausible and computationally feasible. The paper outlines the general connectionist paradigm, followed by a brief description of a three-level network to do parsing. Finally, we trace through an example to illustrate the functioning of the model.

Introduction

The parsing of natural language must be studied in the full context of all other comprehension processes that depend on or are depended upon by the parsing process. While the study of these different processes has led to interesting and valuable results, we believe that the interactions among them are not as low in bandwidth as traditional models would imply. Our work involves trying to model the high connectedness of these aspects. The interpretation of metaphor and indirect speech acts, for example, and the processes of word sense disambiguation and anaphoric reference seem to require a significant application of general memory skills (i.e., "non-linguistic knowledge") to carry out. Further, it seems that any strictly serial decoupling of associated subtasks, i.e., one major subtask and then the next and so on, does not permit enough interdependence of function to explain adequately reasoning tasks that people find quite easy and natural to perform.

In this paper we sketch a model of natural language understanding which deals with the problem of word sense disambiguation in what we believe to be a psychologically plausible manner. Word sense ambiguity is an interesting problem because people resolve it easily, and the task requires knowledge from many sources. Small [1], for example, lists 57 different senses for the word "take," which depend on the local context of the following words. This is not an isolated phenomenon. In an informal study, Gentner [2] found that the 20 most frequent nouns have an average of 7.3 senses each; the 20 most frequent verbs have an average of 12.4 senses each. We believe that a model which tries to emulate how people understand language must have at its core a clean and efficient disambiguation mechanism.

Previous work in this area by Wilks [3], Riesbeck and Schank [4], and Small and Rieger [5] has been adequate at the level of high-level models, but we emphasize a processing structure which is closer to the neuronal hardware, as we believe these models will exhibit consequences not found in sequential, symbol-passing models. (For more discussion of this, see [6, 7, 8, 9, and 10].) Our model will use a uniform processing framework, be able to maintain multiple hypotheses in parallel, and switch interpretations easily based on subsequent context. We base our model in part on recent studies in lexical access by Swinney [11] and Seidenberg et al. [12], which demonstrate that when people hear an ambiguous word, at least two senses of the word are initially active, but by 200 msec later, only the one appropriate to the context is still active.
A cornerstone of our model is parallel access of all meanings followed by an interaction of these meanings with the context of the rest of the sentence. This will result in only one meaning remaining highly active.

Connectionist Models

We have in mind a particular new approach to the study of natural language comprehension: the massively parallel models of the sort currently under development by Feldman and Ballard [7], which they call connectionist models. The basic element of the connectionist model is the computing unit. A unit is characterized by a continuous valued potential (or confidence) between -1 and 1, a vector of inputs, a small set of states, and a single output, which is an arbitrary (preferably simple) function of the inputs. Thus the firing of a unit may be based on logical combinations of input values. The connection pattern is prewired; only the weights on connections may change. Connections may be inhibitory or excitatory, giving the model much flexibility. A coalition of connected units which are mutually reinforcing corresponds to a percept; a stable coalition is one in which the overall excitation exceeds the overall inhibition.

The fundamental premise of the connectionist framework is that individual units do not transmit large amounts of symbolic information, but compute by being appropriately connected to large numbers of similar units. The technique used to encode information in the network is called the unit/value principle. That is, a unit represents a value of a parameter. A unit receives input from other units which provide positive or negative "evidence" for the value the unit represents. If it gets enough positive input, it will "fire." The output is a confidence measure which is transmitted to all connected units.
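A minimal numeric sketch of such a unit follows. The particular update rule, decay constant, weights, and the "noun-context" unit are our own illustrative assumptions; the paper deliberately leaves the unit function open ("an arbitrary, preferably simple, function of the inputs").

    def clamp(x, lo=-1.0, hi=1.0):
        return max(lo, min(hi, x))

    class Unit:
        def __init__(self, name, potential=0.0):
            self.name, self.potential, self.inputs = name, potential, []

        def connect(self, source, weight):     # weight < 0 is inhibitory
            self.inputs.append((source, weight))

    def sync_step(units, decay=0.2):
        """Synchronous update: every unit reads the same previous potentials."""
        new = {u.name: clamp((1 - decay) * u.potential +
                             sum(w * s.potential for s, w in u.inputs))
               for u in units}
        for u in units:
            u.potential = new[u.name]

    # "cart" excites both of its senses (parallel access of all meanings);
    # a hypothetical context unit then leaves only one sense highly active.
    cart = Unit("cart", potential=1.0)
    context = Unit("noun-context", potential=1.0)   # assumed, for illustration
    wagon, transport = Unit("WAGON"), Unit("TRANSPORT")
    wagon.connect(cart, 0.5);       transport.connect(cart, 0.5)
    wagon.connect(transport, -0.3); transport.connect(wagon, -0.3)
    wagon.connect(context, 0.3);    transport.connect(context, -0.3)
    for _ in range(4):
        sync_step([wagon, transport])
    print(round(wagon.potential, 2), round(transport.potential, 2))  # WAGON wins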
Connectionist Parsing

We propose a three-level network to represent the parsing system, shown in Figure 1. One central goal of our research is to construct an experimentally testable theory of the organization of these levels. In general, the design principles of these spreading activation networks must evolve through the data of psychological experimentation and computational simulation. We acknowledge that the networks illustrated in the next section are at an early stage in this evolution.

[Figure 1. Proposed Parsing Network: case logic level above word sense level above lexical level, with input from a phoneme perception network.]

We assume input to the lexical level comes from a word perception network such as that described in McClelland and Rumelhart [8]. Note that we do not have an explicit syntax level. Syntax information is embedded in every level of the network in various ways. An overview of the function of each level follows. More detail is given in [6].

The Lexical Level. This is the input level of our network. We do not try to account for phoneme effects at this stage of the research. The units at this level roughly represent morphemes, although we do not strictly follow the linguistic definition. Thus we represent hearing or reading a sentence such as "John loves Mary" by activating the "John," "loves," and "Mary" units sequentially, with a model-dependent delay between them.

The Word Sense Level. Units on the lexical level are connected to units representing their different senses at this level. For example, "cart" at the lexical level is connected to WAGON (the noun sense) and TRANSPORT (the verb sense) at this level. We represent idioms by conjunctive connections; that is, "throw" and "up" would conjunctively connect to VOMIT -- both would have to be active for the VOMIT node to become active. We conceive of this level as divided into three subnetworks: the noun phrase or "object reference" network, the predicate or "relational" network, and the function word network. We separate the first two for reasons given in [2], and because they play a separate role in connecting to the case logic level. The third subnetwork arises from the role of function words as linguistic operators, and the (controversial) evidence that particular lesions may impede their use.

The Case Logic Level. This level expresses the possible relationships (bindings) between predicates and objects. We posit an exploded case representation; that is, we use a large number of case roles that are more specific than AGENT, OBJECT, etc., but fall into those classes. These nodes thus represent some typing information on case roles for predicates. A predicate at the word sense level excites its case nodes at this level, with a sequencing mechanism increasing the activity of unfilled case nodes as others are filled (thus setting up expectations).

A Simple Example Analysis

Let us now work through the analysis of an example sentence, showing how the final interpretation comes to be decided upon despite a sequence of (wrong) intermediate hypotheses. Consider the following sentences:

(a) "A man threw up a ball."
(b) "A man threw up dinner."

We shall illustrate the analysis of sentence (a), while explicitly taking into consideration the possibility of interpretation (b) along the way. One thing that we shall not do in this section is explain the role of context (other than intra-sentential) in the process; while the existing state of the connection network has activity corresponding to the analyses of previous perceptual inputs (including language), that will not come into play in this example.

As the phrase "a man" is heard, the appropriate processing units become activated at the lexical level, causing the activation of unit SOMEMAN at the word sense level. Note that activation does not only flow from lower levels up, but that feedback activation flows down as well, forming a stable coalition among the units for "a", "man", and SOMEMAN. In addition, there is inhibition between SOMEMAN and SOMEWOMAN, representing the notion that a person cannot be both.

Next, driven from below, the lexical unit for "threw" becomes active, exciting the units on the word sense level that represent its possible meanings. This includes the unit PROPEL in the predicate subnetwork, as shown in Figure 2. The VOMIT node is not yet active, since both "threw" and "up" must be active for VOMIT to exceed threshold. Activation then spreads from the PROPEL unit to units that represent the conceptual cases of PROPEL, which we call PROPELAGT, PROPELOBJ, PROPELFROM, and PROPELTO, and in particular, a stable coalition forms including PROPEL, PROPELAGT, SOMEMAN, "man," and "threw." We ignore in this example the kinds of connections and auxiliary units required to make this work. Assume for the moment that we have managed to build the connection network in such a way that it manifests the desired stable coalition of activity.

[Figure 2. Network Activation after "A man threw". Connections are two-way; a small circle indicates an inhibitory connection; + signs indicate relative levels of activation.]
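Before continuing the example, here is a small numeric sketch of the conjunctive connection used for idioms above: VOMIT fires only when both "threw" and "up" are active, while PROPEL needs only "threw". The gating by min(), the weights, and the threshold are illustrative assumptions, not parameters from the paper.

    THRESHOLD = 0.5

    def conjunctive(*sources):
        """Output is gated by the weakest conjunct, so one active conjunct
        alone cannot drive the target unit past threshold."""
        return min(sources)

    def excitatory(source, weight=0.8):
        return weight * source

    def report(lexical):
        propel = excitatory(lexical["threw"])
        vomit = conjunctive(lexical["threw"], lexical["up"])
        for name, value in (("PROPEL", propel), ("VOMIT", vomit)):
            state = "active" if value > THRESHOLD else "inactive"
            print(f"{name}: {value:.2f} ({state})")

    print("After 'A man threw':")
    report({"threw": 1.0, "up": 0.0})   # PROPEL active, VOMIT inactive
    print("After 'A man threw up':")
    report({"threw": 1.0, "up": 1.0})   # VOMIT now exceeds PROPEL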
The next piece of input forces the connection network to adjust to activity that has an inhibitory effect on the existing stable coalition. The activity of the node "up" causes the VOMIT and UP1 units to become active. The units of the model must readjust their potentials in the face of this new data--the continuous output values of the affected units cause a decrease in the confidence of PROPEL and an increase in that for VOMIT. Since the phrase "threw up" most usually denotes vomiting, the VOMIT unit has a higher activation at this stage than does the one for PROPEL (see Figure 3). As before, associated case units VOMITAGT and VOMITOBJ also become active, and a new stable coalition dominates the local (i.e., language processing) network behavior.

[Figure 3. Network Activation after "A man threw up". The nodes for "threw" and "up" are connected to the VOMIT node by a conjunctive connection.]

Finally, the listener perceives the phrase "a ball", which reinforces the object case node PROPELOBJ and inhibits the analogous VOMITOBJ, resulting in a correct interpretation of the example sentence. This activity takes place through the SOMEBALL node, which inhibits VOMITOBJ and excites PROPELOBJ. PROPEL thus gets more top-down reinforcement from its filled case nodes than VOMIT does. If the word "dinner" had been heard instead, this same behavior, mediated this time by the FOOD word sense node, would have excited the VOMITOBJ case node and led to the other interpretation.

[Figure 4. Network Activation after "A man threw up a ball", showing the resulting stable coalition.]

Conclusion

We have constructed a simulator for connection networks, and early results have guided the design presented here. We are building a network containing 20-30 polysemic words; our current goal involves making it respond appropriately to different combinations of them. We expect this to lead to the development of rules for generating connection patterns upon the introduction of new words into the network. Future research will investigate the problems of reference, focus of attention, word sense induction, and the structure of higher-level networks involving long-term storage.

Acknowledgments

This work was supported in part by the National Aeronautics and Space Administration under Grant NSG 7253 and in part by the Office of Naval Research under Grant N00014-82-K-0193. Their support of this basic research is gratefully acknowledged.

References

[1] Small, Steven L. (1978), "Conceptual Language Analysis for Story Comprehension", TR 663, U. Maryland.
[2] Gentner, D. (1982), "Some Interesting Differences Between Nouns and Verbs", Cognition and Brain Theory 4, 2, 161-177.
[3] Wilks, Y. (1976), "Parsing English II", in Computational Semantics, Charniak and Wilks (Eds.), North-Holland, 155-184.
[4] Riesbeck, Christopher K., and Roger C. Schank (1976), "Comprehension by Computer: Expectation-Based Analysis of Sentences in Context," Research Report #78, Department of Computer Science, Yale University.
[5] Small, Steven L. and Chuck Rieger (1982), "Parsing and Comprehending with Word Experts (A Theory and its Realization)", in Strategies for Natural Language Processing, Ringle and Lehnert (Eds.), Lawrence Erlbaum Associates (to appear).
[6] Cottrell, Gary (1982), "Connectionist Parsing", Technical Report, Department of Computer Science, U. Rochester (to appear).
[7] Feldman, Jerome A. and Dana H. Ballard (1982), "Connectionist Models and their Properties," Cognitive Science, vol. 6 (to appear).
[8] McClelland, James L. and David E. Rumelhart (1980), "An Interactive Activation Model of the Effect of Context in Perception: Part I", Report #8002, Center for Human Information Processing, U. California at San Diego.
[9] Hinton, Geoffrey E. (1981), "Shape Representation in Parallel Systems," Proc. Seventh IJCAI, 1088-1096, Vancouver, B.C.
[10] Gigley, Helen M. (1982), "A Computational Neurolinguistic Approach to Processing Models of Sentence Comprehension", COINS TR 82-9, University of Massachusetts.
[11] Swinney, David A. (1979), "Lexical Access during Sentence Comprehension: (Re)Consideration of Context Effects", Journal of Verbal Learning and Verbal Behavior, vol. 18, 645-660.
[12] Seidenberg, M.S., M.K. Tanenhaus, J.M. Leiman, and M. Bienkowski, "Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing," Cognitive Psychology, in press.
1982
84
183
Why Do Children Misunderstand Reversible Passives? The CHILD Program Learns to Understand Passive Sentences

Mallory Selfridge
Dept. of EE and CS
University of Connecticut
Storrs, Ct. 06268

ABSTRACT

As children learn language they initially misunderstand reversible passive sentences as if they were active sentences. This error is an important clue to possible mechanisms by which children learn to understand passives in general. This paper reports on how the CHILD program learns to understand passive sentences, initially misunderstanding reversible passives as it does so. It presents an explanation of children's performance based on CHILD, and presents a number of predictions which follow from this explanation.

I. Introduction

If a three or four year old child is asked which of two pictures corresponds to "The girl is followed by the boy," he will point to the one in which the girl is following the boy, rather than the one in which the boy follows the girl. That is, children of this age will misunderstand such passive sentences as if they were active. Such a sentence, in which there are two semantically distinct interpretations, is called reversible. By age five, the same child will correctly understand reversible passive sentences. An account of why younger children misunderstand reversible passives would be an important component of an account of their learning to understand passives in general. This paper describes how the CHILD program [7,8,9] learns to understand passive sentences following a progression similar to the one children follow, and describes how it misunderstands reversible passives during this progression. It offers an explanation for how children learn passives and why they make this error, and it offers predictions which may confirm or deny this explanation.

Children's acquisition of the passive can be described as progressing through a sequence of stages (summarized and simplified from data presented by Bever [2] and Strohrer and Nelson [10]). During the first stage, at about age two or three, the child understands passive sentences on the basis of semantic likelihood. During the second stage, reversible passives are understood as if according to active syntax and are thus misunderstood, while the child's understanding of semantically unambiguous passives is correct. In the third stage, at about age five, the child understands reversible passives correctly. The question addressed by this paper is: what mechanism could account for this progression, including the stage 2 error?

Previous research has not provided a satisfactory answer to this question (see [5] for a review of various approaches to modeling child language learning). There are a number of plausible proposals in the psychological literature (e.g. [2]), however none have been tested in a computer program. Previous work in computer modeling of language learning has not addressed this issue either. For example, Anderson [1] explores learning syntactic word classes, fragments of Latin, and verb auxiliaries in generating and understanding declaratives, but does not address the question of comprehension errors during acquisition of the passive.

II. The CHILD Program

CHILD is a computer model of the development of children's language comprehension and generation abilities written in Franz LISP and currently running on a DEC VAX 11/780.
It begins with world knowledge and language experiences similar to those children receive and learns a subset of the word meaning and syntax which children learn. After learning, CHILD can correctly understand utterances which it previously misunderstood. CHILD manifests and offers explanations for a number of characteristics of child language learning. It therefore seems appropriate to study CHILD's ability to learn the passive.

CHILD's language comprehension process is a version of the CA program [3] which incorporates mechanisms derived from Wilks' [11] preference parsing. CHILD's analysis process combines Conceptual Dependency (CD) [6] word meanings to form a CD representing the meaning of the entire utterance. It retrieves semantic features associated with particular slots in a CD itself, and also syntactic features associated with those slots but specific to that particular word. It searches a short term memory for the word meaning which best satisfies those features, and fills the empty slot with that CD. If it later finds that that CD should fill some other slot, it can retrieve it from the first slot, fill the second slot with it, and find the second best filler for the first slot. In this way it seeks the best overall interpretation of an utterance.

CHILD's syntactic knowledge is represented using syntactic "features" associated with a slot in a particular word meaning. These features are formed from the positional predicates PRECEDES and FOLLOWS. These relate the position of a candidate slot filler to either the word whose meaning contains the slot being filled, a filler of another slot in that same meaning, or a lexical function word. Each slot in the meaning of a word has a collection of features describing where in the input a filler is expected to be. In order to understand different voices, CHILD learns and maintains disjunctive "feature sets" under the meaning of each word. Each is a set of features characterizing one way slot fillers can appear. Feature set selection occurs during understanding by considering which set most successfully characterizes the input.

To learn syntax CHILD must acquire syntactic features and also build disjunctive feature sets. CHILD applies its syntax learning procedures to each word whose meaning has empty slots. After having understood the utterance, CHILD uses a record of the input to find where in the utterance the filler of each such empty slot appeared. It describes this position using PRECEDES and FOLLOWS, and stores this description under the word whose meaning contains the slot being examined, associated with that slot. However, before storage is final, CHILD must decide whether the set of features comprising the positional descriptions extracted for the slots in a word from a particular input are a new feature set or should be merged with an already existing feature set. CHILD's strategy is based on a suggestion by Iba [4]. CHILD compares the features extracted from the current input with any existing feature sets. The rule used is that the current set is merged with a previous set only if one set is a subset of the other. Otherwise, the current set is added to the information under the word as a new feature set.
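The merge rule is simple enough to sketch directly. In this hedged re-implementation (the feature encoding is our own illustrative assumption, not CHILD's internal representation), new positional features are merged into an existing set only when one set is a subset of the other; otherwise they form a new disjunctive set.

    def store_features(feature_sets, new_features):
        """feature_sets: list of frozensets already under the word;
        new_features: frozenset extracted from the current input."""
        for i, existing in enumerate(feature_sets):
            if new_features <= existing or existing <= new_features:
                feature_sets[i] = existing | new_features   # merge
                return feature_sets
        feature_sets.append(new_features)                   # new disjunctive set
        return feature_sets

    active = frozenset({("ACTOR", "PRECEDES", "fed"),
                        ("LEADTO-ACTOR", "FOLLOWS", "fed")})
    passive = frozenset({("ACTOR", "FOLLOWS", "fed"),
                         ("ACTOR", "FOLLOWS", "by"),
                         ("LEADTO-ACTOR", "PRECEDES", "fed"),
                         ("LEADTO-ACTOR", "PRECEDES", "was")})

    sets_under_fed = store_features([], active)      # first set created
    sets_under_fed = store_features(sets_under_fed, passive)
    print(len(sets_under_fed))   # 2: the merge fails, as in the trace below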
III. Learning to Understand Passives

The following example is edited from a complete run of the program, during which it learns meanings for all the words it knows and learns their syntax. The example begins after CHILD has learned meanings for the words "fed", "Mom", "Child", and "Rover". The CD representations of CHILD's understanding have been simplified to save space.

The first part of the example corresponds to the first stage of children's acquisition of the passive. CHILD knows no syntax for "fed," and thus interprets passives according to semantic likelihood. According to CHILD's knowledge, it is more likely that "Mom" is feeding "Child," and this knowledge determines CHILD's understanding.

| CHILD hears: "Child was fed by Mom"
| CHILD's understanding is:
| (DO ACTOR (PARENT1)
|     LEADTO (INGEST ACTOR (CHILD) OBJECT (NIL)))
| CHILD hears: "Mom was fed by Child"
| CHILD's understanding is:
| (DO ACTOR (PARENT1)
|     LEADTO (INGEST ACTOR (CHILD) OBJECT (NIL)))

CHILD learns active syntax by being given an example sentence whose interpretation is unambiguous. As far as CHILD knows, the only things that can feed anything are people, and thus there is only one interpretation possible for "Mom fed Rover." Given this sentence, CHILD notes the positions of the fillers, and stores them in a feature set under the word "fed."

| CHILD hears: "Mom fed Rover"
| CHILD's understanding is:
| (DO ACTOR (PARENT1)
|     LEADTO (INGEST ACTOR (DOG1) OBJECT (NIL)))
| CHILD learns syntax of "fed":
|   (ACTOR) precedes "fed"
|   (LEADTO ACTOR) follows "fed"
| ATTEMPTING MERGE OF CURRENT FEATURES WITH EXISTING SET
| NO EXISTING FEATURE SETS
| CREATING NEW FEATURE SET

Having learned active syntax for "fed," CHILD's performance now corresponds to the second stage of learning to understand passives: it understands unambiguous passives correctly, but interprets reversible passives as if they were actives.

| CHILD hears: "Mom fed Rover"
| CHILD's understanding is:
| (DO ACTOR (PARENT1)
|     LEADTO (INGEST ACTOR (DOG1) OBJECT (NIL)))
| CHILD hears: "Child was fed by Mom"
| CHILD's understanding is:
| (DO ACTOR (CHILD)
|     LEADTO (INGEST ACTOR (PARENT1) OBJECT (NIL)))
| CHILD hears: "Mom was fed by Child"
| CHILD's understanding is:
| (DO ACTOR (PARENT1)
|     LEADTO (INGEST ACTOR (CHILD) OBJECT (NIL)))

CHILD progresses to the third stage by learning passive syntax for "fed." CHILD is given an unambiguous passive, and learns the alternative syntactic features which characterize the passive.

| CHILD hears: "Rover was fed by Mom"
| CHILD's understanding is:
| (DO ACTOR (PARENT1)
|     LEADTO (INGEST ACTOR (DOG1) OBJECT (NIL)))
| CHILD learns syntax of "fed":
|   (ACTOR) follows "fed", follows function word "by"
|   (LEADTO ACTOR) precedes "fed", precedes function word "was"
In its first stage it stood passives according to semantic likel%Ei It learned active syntax, and thus in . its second stage, it understood rwersible passives according to active syntax. It then learned passive syntax, which allowed it to correctly understand rwersible passives. JJ. why Are Reversible Passives Misunderstood? To answer this question, consider a more gen- eral question: "How do children learn syntax?" The CHILD program offers the following answer to this more general question: children learn syntax by first noting the position in the input of slot fillers using independent syntactic features and then by storing those features under the meaning of the word which contains those slots. When a child notes features which are not compatible with features known already for a word, he creates a disjunctive feature set. This set contains features characterizing the alternative set of positions the fillers can occur in. Thus CHILD initially learns one such disjunctive set for the word "fed", the active, and later learns a second, the passive. The CHILD model suggests that children misunderstand reversible passives at stage two be- cause they first learn active syntax, and use ac- tive syntax to understand reversible passive sen- tences. They understand non-reversible passives correctly because the semantic requirements over- ride the syntactic features. Children then learn the passive set of features after hearing non- reversible passives, and form a new feature set. When they again hear a reversible passive, their language analysis processes try to use both dis- junctive sets of features, and the passive features are more successful, so they understand the sen- tence correctly. This account of learning to understand pas- sives makes a number of predictions. First, this model predicts that there will be an interval dur- ing which a child will correctly understand some reversible passives yet misunderstand other rever- sible passives. Since the CHILD model proposes that children learn passive syntax individually for each action word, it predicts that there will be a point when a child has learned passive syntax for some words and not for others. Second, this model predicts that children will not learn the passive for a word until either they hear that word used in a non-reversible passive, which allows them to correctly understand the sentence, or they hear the word used in a reversible passive in a situation which allows them to infer its meaning. This is be- cause CHILD needs to understand the meaning of an utterance as a whole correctly before it learns syntax. Third, this model predicts that there will be an intermediate stage in understanding the pas- sive for some word. In this stage, a child will correctly understand some reversible passives in- volving that word yet misunderstand others. The model predicts that at this intermediate stage the child will understand reversible passives in which both interpretations are equally likely, such as "The book was put on the magazine", yet will still misunderstand reversible passives in which the se- mantic likelihood of the syntactically correct in- terpretation is weak, as in "The table was put on the ball." This is because the understanding pro- cess uses preference applied to syntactic and se- mantic features to establish the appropriate in- terpretation, and it takes more experience to build up enough passive features to outweigh the semantic unlikeliness of putting a table on a ball. 253 Acknowledgements Thanks to Dr. 
Acknowledgements

Thanks to Dr. Don Dickerson, Marie Bienkowski, and Jason Engelberg for helpful discussions of this work and for commenting on drafts of this paper.

References

[1] Anderson, J.R. (1981). A Theory of Language Acquisition Based On General Learning Principles. Proc. 7th IJCAI, Vancouver, Canada.
[2] Bever, T. (1970). The Conceptual Basis for Linguistic Structures. In John R. Hays (ed.), Cognition and the Development of Language, John Wiley and Sons, New York.
[3] Birnbaum, L. and Selfridge, M. (1981). Conceptual Analysis of Natural Language. In Inside Computer Understanding: 5 Programs Plus Miniatures, Lawrence Erlbaum Assoc., Hillsdale, New Jersey.
[4] Iba, G. (1979). Learning Disjunctive Concepts from Examples. M.I.T. A.I. Memo 548, M.I.T., Cambridge, Mass.
[5] Pinker, S. (1979). Formal Models of Language Learning. Cognition, 7:217-283.
[6] Schank, R. C. (1973). Identification of Conceptualizations Underlying Natural Language. In R. C. Schank and K. M. Colby (eds.), Computer Models of Thought and Language, W.H. Freeman and Co., San Francisco.
[7] Selfridge, M. (1980). A Process Model of Language Acquisition. Ph.D. Dissertation, Computer Science Research Report 172, Yale University, New Haven, Ct.
[8] Selfridge, M. (1981a). Why Do Children Say "Goed"? A Computer Model of Child Generation. Proc. Third Annual Meeting of the Cognitive Science Society, Berkeley, CA.
[9] Selfridge, M. (1981b). A Computer Model of Child Language Acquisition. Proc. 7th IJCAI, Vancouver, B.C.
[10] Strohrer, H. and Nelson, K.E. (1974). The Young Child's Development of Sentence Comprehension: Influence of Event Probability, Non-verbal Context, Syntactic Form, and Strategies. Child Dev., 45:567-576.
[11] Wilks, Y. (1976). Parsing English II. In Y. Wilks and E. Charniak (eds.), Computational Semantics, North-Holland Publishing Co., NY, NY.
1982
85
184
LEARNING BY CHUNKING
SUMMARY OF A TASK AND A MODEL

Paul S. Rosenbloom and Allen Newell
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

* This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

Abstract

The power law of practice states that performance on a task improves as a power-law function of the number of times the task has been performed. In this article we describe recent work on a model of this effect. The model, called the chunking theory of learning, is based on the notion of chunking. A limited version of this model has been implemented within the Xaps2 production system architecture. When it is applied to a 1023-choice reaction-time task (encoded as a set of productions), task performance is improved (measured in terms of the number of production system cycles). Moreover, the practice curves are power law in form.

I. Introduction

Learning has long been a major topic in Artificial Intelligence. These efforts, however, have primarily focussed on the early stages of learning; that is, on how an initial correct method is learned. Once such a method is acquired, learning stops. With people, at least, it is clear that practice continues to improve performance long after the task can be completed "perfectly". The general rule for these improvements is that the time to perform a task decreases as a power-law function of the number of times the task has been performed: T = BN^(-a). This basic law -- known as the power law of practice or the log-log linear learning law** -- has been known since Snoddy (1926) [7]. It has recently become clear that it holds over the full range of human tasks [4].

** Power laws plot as straight lines on log-log paper: log(T) = log(B) - a log(N).

The ubiquity of the power law of practice argues for the presence of a single common underlying mechanism. The chunking theory of learning [4] proposes that chunking [2] -- a concept already implicated in many aspects of human behavior -- is this common mechanism [1]. Currently, the chunking theory of learning is only a macro theory; it postulates the general character of a learning mechanism, and predicts the general course of learning. This paper reports on recent efforts to fill in the micro-structure of this model. The approach we take is to implement a production-system model [3] of the chunking theory in the context of a specific task -- a 1023-choice reaction-time task [6]*. This type of task may seem somewhat trivial (and foreign) to AI researchers, but it is well within the mainstream of psychological tasks. It has a rich, yet straightforward, underlying structure in which the application of chunking can be investigated, with results extrapolatable to more complex tasks.

* Rosenbloom and Newell [5] will contain a more complete description of this work.

II. The Chunking Theory of Learning

Informally, the chunking theory of learning proposes that task performance is based on knowledge of patterns that occur in the task, while practice consists of the acquisition of these patterns from tasks already performed. The theory thus starts from the chunking hypothesis:

A human acquires and organizes knowledge of the environment by forming and storing expressions, called chunks, which are structured collections of the chunks existing at the time of learning.

The existence of chunks implies that memory is hierarchically structured as a lattice (tangled hierarchy, acyclic directed graph, etc.). A given chunk can be accessed in a top-down fashion, by decoding a chunk of which it is a part, or in a bottom-up fashion, by encoding from the parts of the chunk. Encoding is a recognition or parsing process.

This hypothesis is converted into a performance model by adding an assumption relating the presence of chunks to task performance.

Performance Assumption: The performance program of the system is coded in terms of high-level chunks, with the time to process a chunk being less than the time to process its constituent chunks.

This assumption reveals that chunks are effective because they assist in overcoming a bottleneck. This places a strong constraint on the architecture within which chunking is implemented. There must be a bottleneck (perhaps a serial one), and there must be a parallel component. The performance assumption implies that performance can be improved by acquiring higher-level chunks. A second assumption is needed to tie down this acquisition process:

Learning Assumption: Chunks are learned at a constant rate on average from the relevant patterns of stimuli and responses that occur in the specific environments experienced.
The theory thus starts from the chunking hypothesis: A human acquires and organizes knowledge of the environment by ,,,forming and storing expressions, called chunks, which are structured collections of the chunks existing at the time of learning. The existence of chunks implies that memory is hierarchically structured as a lattice (tangled hierarchy, acyclic directed graph, etc.). A given chunk can be accessed in a top-down fashion, by decoding a chunk of which it is a part, or in a bottom-up fashion, by encoding from the parts of the chunk. Encoding is a recognition or parsing process. This hypothesis is converted into a performance model by adding an assumption relating the presence of chunks to task performance. A Performance Assumption: The performance program of the system is coded in terms of high-level chunks, with the time to process a chunk being less than the time to process its constituent chunks. This assumption reveals that chunks are effective because they assist in overcoming a bottleneck. This places a strong constraint on the architecture within which chunking is implemented. There must be a bottleneck (perhaps a serial one), and there must be a parallel component. The performance assumption implies that performance can be improved by acquiring higher-level chunks. A second assumption is needed to tie down this acquisition process: Learning Assumption: Chunks are learned at a constant rate on average from the relevant patterns of stimuli and responses that occur in the specific environments experienced. *Rosenbloom and Newell [5] will contain a more complete description of this work. 255 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. The final assumption made by the chunking theory of learning ties performance to the structure of the task environment. Task Structure Assumption: The probability of recurrence of an environmental pattern decreases as the pattern size increases. This assumption is trivially true for the typical combinatorial task environment, composed of a set of objects that can vary along a set of dimensions. As the pattern size grows (in terms of the number of dimensions specified), the number of possibilities grows exponentially. Any particular large pattern will therefore be experienced less often than any particular small pattern. III. The Task The task that we employ is Seibel’s 1023-choice reaction-time task [6]. The environment for this task consists of a roughly linear (horizontally) stimulus array of ten lights, and a response array of ten buttons (in a highly compatible one-one correspondence with the lights). On each trial, some of the lights are On, and some are Off. The subject’s task is to respond by pressing the buttons corresponding to the lights that are On. Ten lights. with two possible states for each light, yields 2” or 1024 possibilities (a combinatorial task environment). The configuration with no lights on was not used, leaving 1023 choices. This task has been shdwn to produce power-law practice curves over more than 75,000 trials (Figure 1) [6]. There has been no attempt to model method acquisition in this task, so the model must be initialized with a method for its performance. The control structure of this method can be summarized by the following algorithm. 0.1 I 1 , 10 loo Mean frial number (thousands) Figure 1: Data for 76,725 trials [6] initially the patterns refer to individual lights and buttons. With chunking, the scope of these patterns increase to include multiple lights and buttons. 
Initially the patterns refer to individual lights and buttons. With chunking, the scope of these patterns increases to include multiple lights and buttons. These chunks decrease performance time by allowing more to be done within each iteration of the While loop (the serial bottleneck). The production system architecture facilitates this by allowing a degree of parallelism within each iteration.

IV. The Model

A complete specification of the model must include a description of both the production system architecture (including the representation and acquisition of chunks) and the performance model for the task as implemented within the architecture.

The production system architecture (Xaps2) combines both symbolic and activation-based concepts in a highly parallel control structure. Working memory consists of a set of symbolic objects, each with a type, a name, and an optional set of attribute-value pairs. Each component of an object has an associated activation value (in [-1, 1]). The activation values are used as a focus of attention mechanism (a form of conflict resolution). Positive, negative, and zero activation correspond to active, inhibited, and unattended respectively. On every cycle of the system, every production that is successfully matched to working memory fires its "best" instantiation, as computed from the activations of the working memory elements matched. The effects of these production firings are weighted by the activations of those working memory elements and merged together. Working memory is then modified according to this unified specification, yielding the working memory for the next cycle. Productions are free to execute repeatedly on successive cycles, because there is no refractory conflict resolution.

The performance model for Seibel's task is implemented within this architecture by representing both goals and patterns as objects in working memory. There are five goals in the control structure: (1) do the task; (2) do one trial of the task; (3) process one pattern; (4) find the next stimulus pattern; and (5) execute a response pattern. The While loop is implemented by repeating goal (3) until there are no more lights to be processed. This yields a serial bottleneck at the level of strategic (goal-oriented) processing, because only one instance of each goal can be actively processed at any instant of time (recall that only the best instantiation of each production is fired on each cycle).

Chunks describe how stimulus and response patterns are built up into higher-level patterns (with the primitive patterns defined by the interfaces to the perceptual and motor systems). These patterns are represented by working memory objects. The object describes the location of the pattern (in the stimulus or response field) and the configuration and type of its subparts. A chunk consists of three components: (1) an encoding production; (2) a mapping production; and (3) a decoding production. Encoding productions combine pairs of stimulus patterns (represented in working memory) into higher-level stimulus patterns. These productions are totally data-driven, firing in parallel. Goal (4) selects the next stimulus pattern by the location and activation (which is higher for larger, more well-matched patterns) of these patterns. Goal (4) returns this pattern to goal (3), where the mapping production converts it into a response pattern. This response pattern is then passed to goal (5), where it is decoded and executed. The decoding productions are the inverse of the encoding productions, except that they work on response patterns rather than stimulus patterns.
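A hedged sketch of the three-production chunk just described follows: an encoding step that combines two stimulus patterns, a mapping step from the combined stimulus pattern to a response pattern, and a decoding step that unpacks the response. The dataclass representation, and the assumption that buttons simply mirror light indices, are ours, not the Xaps2 encoding.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Chunk:
        part_a: frozenset     # constituent stimulus patterns (light indices)
        part_b: frozenset

        def encode(self, stimulus_patterns):
            """Bottom-up: if both parts are present, assert the combination."""
            if self.part_a in stimulus_patterns and self.part_b in stimulus_patterns:
                stimulus_patterns.add(self.part_a | self.part_b)

        def map(self, stimulus_pattern):
            """Stimulus pattern -> response pattern (buttons mirror lights)."""
            if stimulus_pattern == self.part_a | self.part_b:
                return frozenset(stimulus_pattern)      # button indices
            return None

        def decode(self, response_pattern):
            """Top-down: unpack the combined response into its constituents."""
            return [self.part_a, self.part_b] if response_pattern else []

    chunk = Chunk(frozenset({3}), frozenset({4}))
    patterns = {frozenset({3}), frozenset({4})}
    chunk.encode(patterns)              # encoding is data-driven and parallel
    combined = frozenset({3, 4})
    print(combined in patterns)         # True: higher-level pattern asserted
    print(sorted(chunk.map(combined)))  # [3, 4]: mapped to a response pattern
    print(chunk.decode(chunk.map(combined)))   # unpacked for execution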
The parallel nature of the productions, combined with the hierarchical structure of chunks, yields logarithmic encoding and decoding processes. This is the sublinear component of the architecture.

In this model, learning consists solely of the acquisition of new chunks. This acquisition is automatic, and not under control of productions. Instead, there is an architectural component which monitors the selection and use of patterns by the performance system. Whenever it finds two successively fixated patterns (within a single trial), it combines them into a new chunk by creating the three productions that process the new pattern. This learning process occurs at nearly a constant rate.

V. Results

The main experimentation consisted of the sequence of attempts at building a working model and their occasional debugging runs. The final version of the model has been run successfully on a sequence of nine trials for the left hand (five lights only). The model is too expensive to run on a large number of trials, so a faster meta-model was implemented. The meta-model simulates the chunks that would be created by the model and the patterns that would be used during performance. It uses this information to estimate the number of cycles that the production system model would require. This estimate is based on a constant 13 cycles per trial, plus 31 cycles for every pattern used (derived from the model's execution of the nine-trial sequence). Figure 2 shows a simulated practice curve for 72,633 trials. The linearity of this curve in the log-log coordinates in which it is plotted reveals its power-law nature.

The model assumes that a chunk is learned whenever possible -- resulting in rapid learning. Within log_2(10) (between three and four) iterations through the task environment (at 1023 trials per iteration), the complete task environment should be learned perfectly. This problem was ameliorated in this simulation by assuming that a chunk is learned with a fixed probability of 0.01 when the opportunity to learn is there.

[Figure 2: A simulation of 72,633 trials (meta-model) -- a log-log practice curve with power-law fit T = 233N^(-a); x-axis: mean trial number (thousands).]
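The meta-model's cycle estimate is easy to re-implement in outline. In this hedged sketch, the random trial generator and the greedy pattern-selection bookkeeping are simplified assumptions; only the 13-cycles-per-trial and 31-cycles-per-pattern constants and the 0.01 learning probability come from the text.

    import random

    def simulate(n_trials, n_lights=10, p_learn=0.01, seed=0):
        rng = random.Random(seed)
        patterns = {frozenset({i}) for i in range(n_lights)}  # one-light chunks
        times = []
        for _ in range(n_trials):
            lights = {i for i in range(n_lights) if rng.random() < 0.5} or {0}
            used = []
            while lights:                      # greedy cover by largest chunk
                p = max((c for c in patterns if c <= lights), key=len)
                used.append(p)
                lights -= p
            times.append(13 + 31 * len(used))  # the meta-model's estimate
            for a, b in zip(used, used[1:]):   # successively used patterns
                if rng.random() < p_learn:
                    patterns.add(a | b)        # acquire a new chunk
        return times

    times = simulate(5000)
    print(sum(times[:100]) / 100, sum(times[-100:]) / 100)  # early vs late mean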
VI. Conclusion

In this paper we have briefly outlined the second step in the development of the chunking theory of learning. By filling in missing details, and implementing the theory within the performance model of a reaction-time task, we have shown that the process of chunking can form the basis of the performance improvements that occur with practice. The model is built out of a production-system architecture, the chunking mechanism, and a task model consisting of a set of productions structured as a goal hierarchy. The results from this simulation verify that learning by chunking produces a practice curve that falls within the same power-law family as the curves of human subjects.

References

1. Chase, W. G. & Simon, H. A. "Perception in chess." Cognitive Psychology 4 (1973), 55-81.
2. Miller, G. A. "The magic number seven, plus or minus two: Some limits on our capacity for processing information." Psychological Review 63 (1956), 81-97.
3. Newell, A. Production systems: Models of control structures. In Visual Information Processing, W. G. Chase, Ed., Academic Press, New York, 1973, pp. 463-526.
4. Newell, A. & Rosenbloom, P. S. Mechanisms of skill acquisition and the law of practice. In Cognitive Skills and Their Acquisition, J. R. Anderson, Ed., Erlbaum, Hillsdale, NJ, 1981, pp. 1-55.
5. Rosenbloom, P. S. & Newell, A. Learning by chunking: a task and a model. In Self-Modifying Production System Models of Learning and Development, D. Klahr, P. Langley, & R. Neches, Eds., in press.
6. Seibel, R. "Discrimination reaction time for a 1,023-alternative task." Journal of Experimental Psychology 66 (1963), 215-226.
7. Snoddy, G. S. "Learning and stability." Journal of Applied Psychology 10 (1926), 1-36.
1982
86
185
Route Finding in Street Maps by Computers and People

R. J. Elliott
M. E. Lesk
Bell Laboratories
Murray Hill, New Jersey 07974

Abstract

We wrote a computer program which gives driving directions in northern New Jersey. Its data base combines a street map and a telephone book, so requests like "give directions from the Lackawanna Diner to the nearest dry cleaner" (properly specified) can be answered. This problem involves both human factors and algorithmic problems. From the human factors standpoint, what kind of route is best: shortest distance, most concise directions, fewest turns, or some combination of these? And from the algorithmic standpoint, what is a good shortest-path algorithm: breadth-first search, depth-first search, pre-storing important routes, divide and conquer, or keeping a hierarchy of maps with progressively fewer streets? We implemented breadth-first and depth-first search, both single-ended and double-ended. Double-ended search was faster in 14 of 16 examples and produced shorter or equal length routes in 13 of 16. Depth-first search was always faster than breadth-first, and produced shorter routes half the time. We also asked eight subjects for directions on 4 maps. The 32 tries at 4 problems produced 22 different routes. People's strategies typically include finding main roads, and applying divide-and-conquer as well as depth-first search. But it is difficult to characterize the experimental subjects, since different problems caused them to try different search algorithms.

[Figure 1. Downtown Chatham, NJ]

Introduction. British studies show that 4% of all driving is wasted, done by people who are either completely lost or just not following the best route [1]. Experimentation with routefinding is both a practical and an interesting heuristic problem. We have a street map of Morris and Essex counties in New Jersey [2], a telephone book covering parts of the area, and a program to give driving directions. A sample map of Chatham, NJ is shown in Figure 1; a map of part of Madison, NJ including some business names is shown in Figure 2. We can find routes between two places where the starting place is identified as: (a) a street number and name; (b) the intersection of two streets; or (c) the name of a business listed in our Yellow Pages with an identifiable address; and the ending place is identified in any of these ways or as (d) "nearest X" where X is the name of a Yellow Pages category. Directions can be printed, or a map showing the route drawn. The large and accurate data base eliminates the need to handle "fuzzy" positions as considered by McDermott [3], but instead forces us to worry about the implementation of the routefinder.

[Figure 2. Downtown Madison, NJ]

At first, we actually found the shortest route. However, people shown minimum-distance routes often recoiled in horror; they had far too many turns. In our area, on a trip of any length, the program would usually make turns at least every mile, having found some way to save a few yards. We introduced a criterion that each right turn costs 1/8 mile and each left turn costs 1/4 mile. With these extra penalties, minimum-distance routes look reasonable to users.
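The turn-penalty idea can be sketched as a search over directed edges, so that the previous heading is known when each turn is charged. This is our illustrative reconstruction, not the paper's data structures: Dijkstra's algorithm on a toy grid of 0.1-mile blocks, with the stated 1/4-mile left and 1/8-mile right penalties, and U-turns arbitrarily charged as lefts.

    import heapq

    LEFT, RIGHT = 0.25, 0.125          # miles, as in the text
    HEADINGS = {(0, 1): "N", (0, -1): "S", (1, 0): "E", (-1, 0): "W"}
    TURN = {("N", "W"): LEFT,  ("W", "S"): LEFT,
            ("S", "E"): LEFT,  ("E", "N"): LEFT,
            ("N", "E"): RIGHT, ("E", "S"): RIGHT,
            ("S", "W"): RIGHT, ("W", "N"): RIGHT}

    def turn_cost(old_heading, new_heading):
        if old_heading is None or old_heading == new_heading:
            return 0.0                 # straight ahead is free
        return TURN.get((old_heading, new_heading), LEFT)  # U-turn ~ left

    def route_cost(start, goal, block=0.1, size=10):
        """Dijkstra over (node, heading) states on a size x size grid;
        returns the minimum penalized route cost in miles."""
        frontier = [(0.0, start, None)]
        settled = {}
        while frontier:
            cost, node, heading = heapq.heappop(frontier)
            if node == goal:
                return cost
            if settled.get((node, heading), float("inf")) <= cost:
                continue
            settled[(node, heading)] = cost
            for step, h in HEADINGS.items():
                nxt = (node[0] + step[0], node[1] + step[1])
                if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                    heapq.heappush(
                        frontier,
                        (cost + block + turn_cost(heading, h), nxt, h))
        return float("inf")

    print(route_cost((0, 0), (3, 3)))
    # 0.725: six 0.1-mile blocks plus a single right turn beats zig-zagging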
In addition to handling distance minimization, the program also has facilities for handling one-way streets and limited access highways. We are currently extending the cost algorithm to include assumption of different rates of speed on different streets.

Computer algorithms. Mathematically, this problem is familiar as the shortest-path problem [4-9]. In the substantial literature, a nondirectional breadth-first search is normally recommended. The reason is that most of the mathematicians are trying to minimize the worst-case time, and the worst case includes non-planar graphs with short edges between random points in the graph. In such a case, there is not much advantage to "directional" searching, even when directions can be defined. Furthermore, many of the graphs being used in these papers have few nodes and many edges.

Our graphs, however, are quite different. Street maps are nearly always planar (some parts of San Francisco being exceptions), and have Euclidean distances and other nice properties like triangle inequalities. More important, there are a great many nodes while edges are sparse. In our Morris and Essex street map, there are 40,000 nodes and 65,000 edges; of these, 39,000 nodes and 51,000 edges involve streets (rather than rivers, town boundaries, railroads and the like). (To compare with something more familiar to people, Manhattan has 5000 nodes and 8000 edges; 4000 nodes and 7000 edges involve streets.) As a result, algorithms which touch almost every street node are too slow in our case. Even considering all the street nodes that lie in between the source and destination would prevent rapid response. (Although we retain the railroad and river information so that important landmarks are presented in directions or on drawn maps, they are not searched in route-finding.)

We have compared the basic breadth-first and depth-first search algorithms, using both double-ended and single-ended searching. In breadth-first search, we do not spread out omnidirectionally; instead the search is biased in the right direction by penalizing points based on their distance from the destination. In depth-first search, we follow streets as long as the distance on this street is decreasing. The single-ended searches go from the starting point to the destination; the double-ended searches advance from both points until a meeting point in the middle is found. Table 1 shows the rough results, averaged over ten trips in New Jersey and California.

Table 1. Computer Routing

                   Single-end               Double-end
Method             Nodes      Distance      Nodes      Distance
                   touched    (miles)       touched    (miles)
Breadth-first      904        7.65          784        7.66
Depth-first        509        8.55          450        7.88

As can be seen, depth-first search touches substantially fewer nodes (which means that it runs on average 45% faster) and yet finds, in the two-way case, almost as good routes. Double-ended searching is better than single-ended for either algorithm.
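The double-ended, direction-biased procedure just described can be sketched as follows. This is our reconstruction in Python, not the authors' implementation: the graph encoding is an assumption, and the search stops at the first node reached from both frontiers, as the paper describes, so its routes (like the paper's) may be slightly longer than the true shortest path.

import heapq
from math import hypot

def double_ended_route(edges, coords, src, dst):
    # edges:  node -> list of (neighbor, segment length in miles)
    # coords: node -> (x, y); used only to bias each frontier toward
    #         its far endpoint, as described in the text.
    goal = (dst, src)                      # far endpoint for each frontier
    dist = ({src: 0.0}, {dst: 0.0})        # best cost found so far
    prev = ({src: None}, {dst: None})      # back-pointers for each frontier
    heaps = ([(0.0, src)], [(0.0, dst)])
    while heaps[0] and heaps[1]:
        for side in (0, 1):
            if not heaps[side]:
                return None
            _, u = heapq.heappop(heaps[side])
            if u in dist[1 - side]:        # frontiers met: stop here
                return _join(prev, u)
            for v, length in edges.get(u, ()):
                g = dist[side][u] + length
                if g < dist[side].get(v, float("inf")):
                    dist[side][v] = g
                    prev[side][v] = u
                    bias = hypot(coords[v][0] - coords[goal[side]][0],
                                 coords[v][1] - coords[goal[side]][1])
                    heapq.heappush(heaps[side], (g + bias, v))
    return None

def _join(prev, meet):
    path, u = [], meet
    while u is not None:                   # walk back to the source
        path.append(u)
        u = prev[0].get(u)
    path.reverse()
    u = prev[1].get(meet)
    while u is not None:                   # then forward to the target
        path.append(u)
        u = prev[1].get(u)
    return path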
Other algorithms that might be considered: (a) Divide and conquer. It is faster to find a route from A to B and then from B to C than to find a route from A to C. Thus, a strategy of identifying points likely to be in the middle of a route would substantially simplify searching. Divide and conquer would also be effective at dealing with obstacles, such as rivers or railroads. Unfortunately the obvious algorithm for picking intermediate points (identifying street junctions roughly midway between the endpoints) is horrible on rectangular street grids; it would follow a staircase with many turns. (b) Pre-stored routes. In familiar areas, it is clear that people remember particular trips and find new routes by relating them to existing trips. Although such an "expert" system could probably be built, it seems unnecessary, since people can find routes in areas they've never seen before. (c) Hierarchical search. This is probably the right answer: first go to important streets, then travel on a backbone map of important streets until you get to the destination, and then stop. As will be seen, this shares many elements with what people do, and is reasonably compatible with computer requirements.

Figure 3. Route from Lawrence Drive, Berkeley Heights, to Hall Road, Chatham

Figure 4. Route from the Lackawanna Diner to Oak St. and MacCulloch Avenue, Morristown

Human algorithms. Since the mathematical literature was not enough to solve the problem, we tried looking at what people do. Two experiments were run; in a preliminary task 15 staff members were presented with an unfamiliar routing problem; then in a more systematic protocol 8 staff members each got the same 4 problems. All problems were in unfamiliar areas (Pasadena, Dallas, San Mateo, and England). As mentioned, people varied quite widely in the routes they selected. The subjects were not specifically urged to find either the shortest or the fastest or the simplest route, but to do whatever they would do normally if asked for information. Table 2 shows some statistics on human route finding.

Table 2. Variations in Human Solutions

Problem       No. solutions    Route lengths    Solution times
              (8 subjects)     (miles)          (seconds)
Pasadena      6                4.2-6.2          30-432
Dallas        8                7.8-9.6          32-183
San Mateo     4                3.5-4.5          32-117
England       4                18.5-20.6        17-73

The total time taken by the subjects to solve all four route problems ranged from 133 seconds to 678 seconds, with a mean of 302 seconds. In general, the longer routes represent greater use of high-speed roads, even if not going in the same direction. We note that in Dallas, where using Interstate 35 added 18% to the trip length, two people used it. In Pasadena, two people used the Foothills Freeway, with a cost of 23%. In San Mateo, the Bayshore Freeway could have been used at an added trip cost of 27%; no one did so.

The typical strategy was: first scan quickly for important roads going in the right direction. Then do depth-first search, single-ended, going to and along the important roads. This combines divide-and-conquer and depth-first search on the sub-problems. The first part of the procedure resembles the use of intermediate goals by Kuipers' subjects [10]. In general a "first-hit" strategy was used: as soon as one route was found people reported success. In 4 cases of the 32, though, people tried again and found a second route. People seem to have some sense of how long a trip "ought" to take, and they search longer, back up, or use more breadth-first searching when they think they have found an unacceptable result. Two people did one double-ended search. Breadth-first search was used in complex cases to find the way to the expressway. Typically, about half the time used in finding a route was spent looking for major roads and otherwise in "planning"; the other half was spent tracing out the route.

The printing on the map affects the route. People depend on color and presentation, not labels, to find main streets. The English map was condemned as unfamiliar (and one person tried at first to go along a railway).
The American street maps (all Rand McNally or Gousha) were criticized as not indicating expressway entrances clearly enough. People always began searching at the starting point. Thus, if the starting point was on a main road, people would always go along it first. A destination near a main road was less likely to cause people to use that road.

On the Pasadena trip, the average human took 98 seconds to find a route 5.4 miles long, requiring 3.3 turns (even excluding one particularly slow human, the remaining seven subjects took an average of 56 seconds). The computer, using two-ended depth-first search, took 12 seconds of VAX 11/750 CPU time to find a route 4.2 miles long, requiring 3 turns. Figures 5 and 6 show some human and computer routes, respectively. Thus, in this example (representing people unfamiliar with an area), using computer routing produces a saving of 24% in gasoline and 10% in the chance of getting lost. However, we should note that a human expert (someone who went to graduate school in Pasadena) took 24 seconds to find a route of 4.5 miles involving 3 turns; and a human grandmaster (born and brought up in Pasadena) found a similar trip without even looking at a map.

Figure 5. Human routes in Pasadena. Routes 1 and 2 were each followed by two subjects; routes 3 through 6 were each chosen by one subject.

Conclusions. On one level, we have a route-finding program. Depth-first search seems to be most effective. Work continues on smoothing out the interface so a greater variety of requests can be handled (e.g. giving synonyms for category names) and on speeding up the routing. But on another level, note that in this practical problem, an almost exact mathematical analogy turns out to have mostly unproductive literature solutions. Since these solutions are valuable in other kinds of shortest-path problems, this emphasizes the importance of the problem domain in picking algorithms.

References

1. Computerworld, p. 7 (Dec. 22, 1980).
2. Geographic Base File GBF/DIME, 1980: Technical Documentation, U.S. Department of Commerce, Data Users Services Division, Washington, D.C. (1980).
3. D. McDermott, "A Theory of Metric Spatial Inference," Proc. First National AI Conference, Stanford, California, pp. 246-248 (1980).
4. R. W. Floyd, "Algorithm 97: Shortest Path," Comm. Assoc. Comp. Mach. 5, p. 345 (1962).
5. A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Mass. (1974), pp. 207-209.
6. D. B. Johnson, "Efficient Algorithms for Shortest Paths in Sparse Networks," J. Assoc. Comp. Mach. 24(1), pp. 1-13 (1977).
7. C. Witzgall, J. F. Gilsinn, and D. R. Shier, "Shortest paths in networks," pp. 171-255 in Case Studies in Mathematical Modeling, ed. W. E. Boyce, Pitman, Boston (1981).
8. S. E. Dreyfus, "An Appraisal of Some Shortest-Path Algorithms," Operations Research 17(3), pp. 395-412 (May-June 1969).
9. R. E. Tarjan, "Fast Algorithms for Solving Path Problems," J. Assoc. Comp. Mach. 28(3), pp. 594-614 (1981).
10. B. Kuipers, "Modeling Spatial Knowledge," Cog. Sci. 2(2), pp. 129-153 (1978).

Figure 6. Computer routes in Pasadena (3.9 miles). Route 1: depth-first search; Route 2: breadth-first search.
PLAN UNDERSTANDING, NARRATIVE COMPREHENSION, AND STORY SCHEMAS*

William F. Brewer
Department of Psychology
University of Illinois
603 East Daniel Street
Champaign, IL 61820

ABSTRACT

This paper proposes that a comprehensive theory of stories must include components that deal with: (a) plan understanding, (b) narrative comprehension, and (c) the unique structural and affective aspects of the subclass of narratives that are stories.

A. Introduction

This paper outlines a position on the structure of theories of stories and then examines recent work in artificial intelligence in terms of this framework. The paper will also refer to empirical findings from psychology where they are relevant to the theoretical issues at hand. Theories of plan understanding attempt to account for how observed actions are interpreted in terms of underlying plans. Theories of narrative comprehension attempt to give an account of how narrative discourse is processed to produce an underlying cognitive representation. Theories of stories attempt to account for the unique properties of the subclass of narrative discourses that are stories.

B. Plan Understanding

Since most stories deal with the goal-directed actions of one or more characters, an adequate theory of stories should include a plan-understanding component to explain how the reader reconstructs the character's intentions from the described actions (cf. [17], [21]). Researchers from artificial intelligence have made a number of important contributions in this area. Schmidt, Sridharan and Goodson [17] developed a system (BELIEVER) that attempts to account for how an observer of an action carries out plan understanding. Both Meehan's work [15] and Wilensky's work [21] have been directed at developing a theory of stories qua stories (cf. [15] p. 115; [21] p. 65); however, in practice Wilensky's program (PAM) focuses almost completely on the problems of plan understanding and Meehan's program (TALE-SPIN) focuses almost completely on the problem of plan production in three-dimensional space. Overall, this work on plan recognition has probably been the area where artificial intelligence has made the greatest contribution to the development of a comprehensive theory of stories. None of the other disciplines studying stories (linguistics, literary theory, psychology) have given as detailed and as analytic an account of the processes involved in going from observed actions to postulated plans as has the work in artificial intelligence.

A recent series of empirical studies [4], [12] emphasizes the importance of plan comprehension as a component of an overall story theory and suggests the need for reinterpretation of some of the psychological theories of stories. Lichtenstein and Brewer [12] showed subjects videotapes of an actor carrying out a goal-directed action (e.g., setting up a slide projector) and then tested the subjects' recall of the actions. We found that goal-directed actions were recalled better than non-goal-directed actions; that actions higher in the plan hierarchy were recalled better than actions lower in the hierarchy; and that actions presented out of canonical plan position shifted back to canonical position in recall. These results are essentially the same as those found in psychological studies of story recall [14], [18], [20] that have been taken to support the story grammars as theories of stories.
The replication of the standard story recall findings with recall of a simple observed goal-directed action strongly suggests that the story grammar findings are not due to story structure, but are predominately due to the effects of plan schemas that subjects imposed on the actions of the characters in the passages they heard. Thus, analysis of recent work in psychology and artificial intelligence suggests that plan understanding is an important component of a theory of stories.

C. Narrative Understanding

A narrative is a text that describes a sequence of events (and since most narratives are about people they tend to be descriptions of goal-directed actions). Thus, in constructing a theory of narratives one must carefully distinguish between the actions described in a narrative (event structure) and the linguistic representation of the actions in the narrative (discourse structure). This is a traditional distinction in literary theory [7], [19] and provides, for text structure, the same kind of theoretical power that the distinction between surface structure and deep structure provides for the analysis of sentences.

The author of a narrative text must make decisions about: (a) how much of the information from the underlying event sequence is to be placed explicitly in the text, and (b) how to order the information in the text (for a more detailed discussion see [3]).

1. Narrative completeness - In ordinary narratives much of the information about the underlying events is omitted from the text, and thus a theory of narrative comprehension must include components that allow the reconstruction of the underlying event information from the text. This problem (usually described as the inference problem) is one that has attracted much attention by researchers in artificial intelligence (e.g., [16]) and has led to much interesting research.

2. Narrative organization - The issue of narrative organization has attracted considerable work in literary theory [19], and psychologists have tried to deal with some aspects of the issue by adding a transformational component to story grammars [10]. However, there is a serious gap in recent work on this topic in artificial intelligence. There has been much research on narrative understanding at the sentence level [23], but little concern with global narrative structures. Thus, current models in artificial intelligence cannot deal with ordinary phenomena such as flashbacks and flashforwards. Meehan's program (TALE-SPIN) primarily focuses on the problem of plan production, but it does contain one device that functions at the level of narrative organization. When two characters are carrying out a sequence of events at the same time, the program gives the entire event sequence for one character and then shifts to the other character (rather than intertwining the events in the narrative). Clearly theories in the area of artificial intelligence need a component to deal with narrative organization and, to the degree to which they are intended to map psychological processes, they must give an account of the ease or difficulty of comprehension of different narrative organizations.

D. Story Schemas

1. Stories vs. narratives - A number of recent researchers have argued that stories must be distinguished from the larger class of narratives [1], [5], [8], [9], [11].
These theorists share the intuition that stories have special properties and that theories of stories must deal with constructs such as conflict, suspense, and interestingness. However, not all recent story theories agree that stories must be distinguished from narratives. Theories in the story grammar tradition [14] and simple plan-based theories [2] do not make such a distinction, and both classes of theories would consider a narrative describing someone tying their shoe to be a story.

2. Difficult goal theories - Most of the recent artificial intelligence research on stories has taken a clear position in favor of distinguishing stories from narratives (e.g., [15], [21], [22]). The general position adopted by these theorists is that stories are the set of goal-directed actions in which a character faces a difficult goal and has trouble fulfilling this goal. This approach captures the intuition that stories have special properties and gives a partial account of issues such as conflict and suspense.

However, Brewer and Lichtenstein [6] have made strong arguments against this position. We argue that this approach places the crucial features of what makes something a story in the wrong locus. It is what the reader feels that is crucial, not the difficulty the character is having. For many stories the two approaches agree, since when the character is facing severe difficulties the reader will be in suspense. However, there are narratives which act as a crucial test of the two views: consider a narrative in which the character is carrying out some mundane plan (for example, driving home) and facing no problems reaching his goal, yet the reader is in suspense because the reader has some significant information (for example, that the gas line is leaking and the car may explode). The difficult-goal approach would classify this text as a non-story, whereas the reader-affect approach would classify this text as a story. In Brewer and Lichtenstein [6] we obtained story intuition data from a group of subjects showing that this type of narrative is, in fact, considered to be a story, thus supporting the reader-affect position.

3. A structural-affect theory - In several recent papers [5], [6] we have attempted to develop a theory of the reader's story schema. We have assumed a theory of plan understanding and a theory of narrative comprehension and have attempted to deal with the unique characteristics of stories. We argue that stories are a subclass of narratives which have entertainment as their primary discourse force. We postulate that the entertainment function is carried out by producing certain affective states in the reader (e.g., suspense, surprise, curiosity), and that the affective states are produced by the inclusion of certain types of events in the event structure of narratives and by particular arrangements of the discourse with respect to the underlying events. This approach has a number of important consequences: (a) It gives a detailed account of the differences between stories and simple narratives. (b) It captures the intuitions that a story theory ought to include constructs such as suspense and surprise. (c) It suggests that theories of stories should not be evaluated with comprehension or memory criteria, but with story intuitions, story-liking judgments, and affect judgments. While our particular theory of stories may be incorrect, it appears to us that any adequate theory of stories is going to have to deal with the issues raised by this approach.
E. Conclusion

Overall, it seems that the distinctions outlined above, between theories of plan understanding, theories of narrative comprehension, and theories of the story schema, make for conceptual clarity and, when applied to the recent work on stories in artificial intelligence, provide a number of suggestions for future development.

REFERENCES

[1] de Beaugrande, R., and Colby, B.N. "Narrative Models of Actions and Interaction." Cognitive Science, 3, (1979) 43-66.
[2] Black, J.B., and Bower, G.H. "Story Understanding as Problem-Solving." Poetics, 9, (1980) 223-250.
[3] Brewer, W.F. "Literary Theory, Rhetoric, and Stylistics: Implications for Psychology." In R.J. Spiro, B.C. Bruce, and W.F. Brewer (Eds.), Theoretical Issues in Reading Comprehension: Perspectives from Cognitive Psychology, Linguistics, Artificial Intelligence, and Education. Hillsdale, N.J.: Erlbaum, 1980.
[4] Brewer, W.F., and Dupree, D.A. "Use of Plan Schemata in the Recall and Recognition of Goal-Directed Actions." Journal of Experimental Psychology: Learning, Memory and Cognition, in press.
[5] Brewer, W.F., and Lichtenstein, E.H. "Event Schemas, Story Schemas, and Story Grammars." In J. Long and A. Baddeley (Eds.), Attention and Performance IX. Hillsdale, N.J.: Erlbaum, 1981.
[6] Brewer, W.F., and Lichtenstein, E.H. "Stories are to Entertain: A Structural-Affect Theory of Stories." Journal of Pragmatics, in press.
[7] Chatman, S. Story and Discourse. Ithaca, N.Y.: Cornell University Press, 1978.
[8] van Dijk, T.A. "Action, Action Description, and Narrative." New Literary History, 6, (1975) 273-294.
[9] van Dijk, T.A. "Philosophy of Action and Theory of Narrative." Poetics, 5, (1976) 287-338.
[10] Johnson, N.S., and Mandler, J.M. "A Tale of Two Structures: Underlying and Surface Forms in Stories." Poetics, 9, (1980) 51-86.
[11] Kintsch, W. "Learning from Text, Levels of Comprehension, or: Why Anyone Would Read a Story Anyway." Poetics, 9, (1980) 87-98.
[12] Lichtenstein, E.H., and Brewer, W.F. "Memory for Goal-Directed Events." Cognitive Psychology, 12, (1980) 412-445.
[13] Mandler, J.M. "A Code in the Node: The Use of Story Schema in Retrieval." Discourse Processes, 1, (1978) 14-35.
[14] Mandler, J.M., and Johnson, N.S. "Remembrance of Things Parsed: Story Structure and Recall." Cognitive Psychology, 9, (1977) 111-151.
[15] Meehan, J.R. "The Metanovel: Writing Stories by Computer." New Haven, CN: Yale University Department of Computer Science, Research Report No. 74, 1976.
[16] Schank, R.C. Conceptual Information Processing. Amsterdam: North-Holland, 1975.
[17] Schmidt, C.F., Sridharan, N.S., and Goodson, J.L. "The Plan Recognition Problem: An Intersection of Psychology and Artificial Intelligence." Artificial Intelligence, 11, (1978) 45-83.
[18] Stein, N.L., and Nezworski, T. "The Effect of Organization and Instructional Set on Story Memory." Discourse Processes, 1, (1978) 177-193.
[19] Sternberg, M. Expositional Modes and Temporal Ordering in Fiction. Baltimore: Johns Hopkins University Press, 1978.
[20] Thorndyke, P.W. "Cognitive Structures in Comprehension and Memory of Narrative Discourse." Cognitive Psychology, 9, (1977) 77-110.
[21] Wilensky, R. "Understanding Goal-Based Stories." New Haven, CN: Yale University Department of Computer Science, Research Report No. 140, 1978.
[22] Wilensky, R. "Points: A Theory of Story Content." Berkeley, CA: University of California, Electronics Research Laboratory, Memorandum No. UCB/ERL M80/17, 1980.
[23] Winograd, T. Understanding Natural Language. New York: Academic Press, 1972.
AFFECT PROCESSING FOR NARRATIVES*

Michael G. Dyer
Computer Science Department
Yale University
New Haven, CT 06520

* This work supported in part by ARPA contract N00014-75-C-1111 and NSF contract IST7918463.

ABSTRACT

This paper presents a theory of AFFECT processing in the context of BORIS [Dyer, 1982], [Dyer, 1981a], a computer program designed to read and answer questions about complex narratives. Here, "complex" entails the coordination, application and search of many distinct sources of knowledge during both comprehension and question answering. This paper concentrates only on those structures and processes which interact with affect situations. The affect component in BORIS is not a separate module, but rather a series of structures and processes which arise as various lexical items are encountered during narrative comprehension and question answering [Dyer, 1981b].

1. Introduction

Descriptions of characters' emotional reactions to encountered situations occur with frequency in narratives of any complexity. Why is this the case? What effect does this have on the way readers process narrative text? Examples in this paper will be taken from portions of a narrative read by BORIS [Lehnert, Dyer, et al., 1982]. Affective segments are highlighted in boldface:

DIVORCE-1

Richard hadn't heard from his... roommate Paul for years... When a letter finally arrived... Richard was anxious to find out how Paul was. Unfortunately, the news was not good. Paul's wife Sarah wanted a divorce. She also wanted the car, the house, the children, and alimony. Paul... didn't want to see Sarah walk off with everything... he was hoping for a favor from the only lawyer he knew. Paul gave his home phone number in case Richard... could help.

Richard eagerly picked up the phone... After a brief conversation, Paul agreed to have lunch with him... He sounded extremely relieved and grateful.

...as Richard was driving to the restaurant he barely avoided hitting an old man on the street. He felt extremely upset... and had three drinks at the restaurant. When Paul arrived Richard was fairly drunk. After the food came, Richard spilled... coffee on Paul. Paul seemed very annoyed by this so Richard offered to drive him home for a change of clothes.

When Paul walked into the bedroom and found Sarah with another man he nearly had a heart attack. Then he realized what a blessing it was. With Richard there as a witness, Sarah's divorce case was shot. Richard congratulated Paul and suggested that they celebrate at dinner. Paul was eager to comply.

DIVORCE-1 requires, minimally, the following abstract constructs to represent its conceptual content: object primitives [Lehnert and Burstein, 1979], physical states, scripts, plans, goals [Schank and Abelson, 1977], interpersonal relationships, settings [Dyer and Lehnert, 1982], abstract themes [Dyer, 1981c], and in addition, a theory of affect.

2. Representing Emotions

In BORIS, emotional reactions are represented by a knowledge construct called an AFFECT. It is used to trace the emotional states of narrative characters at the moment that emotional reactions are encountered. Each AFFECT is constructed out of six basic components:

1. STATE -- This component holds a primitive state of emotional arousal which is either positive (POS) or negative (NEG). In general, a NEG emotion signals a goal failure while a POS emotion indicates goal success.

2. CHAR -- Indicates which character in the narrative is feeling the primitive emotion.
3. G-SITU -- Refers to the goal situation which gave rise to the primitive emotional state. The major information an AFFECT carries is a goal situation which describes the success, failure, or activation of a goal, along with other information, such as agency and expectations. These are discussed below.

4. TOWARD (optional) -- Primitive emotions can be directed at another character. For example, "anger" and "guilt" can arise from goal situations involving other characters.

5. SCALE (optional) -- Characters can feel an emotion at various levels of intensity. In BORIS, affect intensity indicates the importance of a goal to the character involved.

6. E-MODE (optional) -- Refers to the expectations characters have about the likely future outcome of a goal.

AFFECTS are important because they implicitly convey information about goals [Wilensky, 1978]. Although AFFECTS do not mention specific goals, they do describe abstract goal situations which are currently active. In BORIS "joy" indicates that a character has achieved some goal, while "sad" indicates a goal failure. More complicated goal situations involve the intervention of other characters (as in the case of agency). Hence "grateful" implies that some goal of x has been achieved by means of another character y, while "anger" directed by x toward y implies that y has caused a goal failure for x. In each case, affective descriptors say nothing about specific goal content, but instead serve to reveal abstract goal circumstances. A portion of the BORIS lexicon appears below:

BORIS Affect Lexicon

lexicon                        affect info.                      goal situation
happy, joyous, glad            (AFFECT STATE (POS)               (a) Goal of x achieved
                                CHAR x G-SITU (a))
unhappy, upset, sad            (AFFECT STATE (NEG)               (b) Goal of x thwarted or suspended,
                                CHAR x G-SITU (b))                   or preservation goal active
grateful, thankful             (AFFECT STATE (POS)               (c) y caused goal situation (a)
                                CHAR x G-SITU (c) TOWARD y)          to occur
annoyed, angry, furious        (AFFECT STATE (NEG)               (d) y caused goal situation (b)
                                CHAR x G-SITU (d) TOWARD y)          to occur
hopeful                        (AFFECT STATE (POS)               (e) Goal of x is active
                                CHAR x G-SITU (e)
                                E-MODE (EXPECTED))
fearful, worried               (AFFECT STATE (NEG)               (f) P-goal (i.e. preservation or
                                CHAR x G-SITU (f)                    maintenance goal) is active
                                E-MODE (EXPECTED))
surprised, shocked             (AFFECT STATE pos/neg             (g) A goal is achieved or thwarted
                                CHAR x G-SITU (g)
                                E-MODE (UNEXPECTED))
relieved, allayed              (AFFECT STATE (POS)               (h) Situation (f) was active but
                                CHAR x G-SITU (h))                   p-goal failure avoided
disappointed                   (AFFECT STATE (NEG)               (i) Situation (e) was active but
                                CHAR x G-SITU (i))                   goal is now thwarted
proud, smug                    (AFFECT STATE (POS)               (j) Goal of y achieved by x
                                CHAR x G-SITU (j) TOWARD y)
guilty, ashamed,               (AFFECT STATE (NEG)               (k) Goal of y thwarted by x
embarrassed, regretful          CHAR x G-SITU (k) TOWARD y)

More complicated AFFECTS are represented by partial decomposition into more basic AFFECTS. For instance, "gratitude" refers to goal situation (c) in the lexicon, which itself accesses the simpler AFFECT of "joy" (goal situation (a)). So "gratitude" represents an abstract situation in which x feels "joy" due to the fact that y has achieved x's goal.
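For concreteness, the six-component AFFECT construct can be rendered as a record. The following Python sketch is our illustration, not BORIS's Lisp structures; the field names follow the paper, but the data types and the example filler values are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Affect:
    # One emotional reaction of a narrative character (after Dyer 1982):
    # state   "POS" or "NEG" primitive arousal
    # char    the character feeling the emotion
    # g_situ  the goal situation (success/failure/activation) behind it
    # toward  optional target character (e.g., anger at someone)
    # scale   optional intensity, a proxy for goal importance
    # e_mode  optional expectation about the goal's future outcome
    state: str
    char: str
    g_situ: dict
    toward: Optional[str] = None
    scale: Optional[float] = None
    e_mode: Optional[str] = None

# "Grateful" decomposes into joy caused by another character:
gratitude = Affect(state="POS", char="Paul",
                   g_situ={"goal": "LEGAL-HELP", "achieved-by": "Richard"},
                   toward="Richard")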
"Relief" and "disappointment" involve more complex goal situations than "hope" or "fear". In BORIS, "relief" indicates that a narrative character x at one point had an active goal, which x expected would fail. Instead, the goal either did not fail, or was then actually achieved. So "relief" (h) refers to the AFFECT structure described by "worry" (f). Conversely, "disappointment" is represented by an initial expectation that a goal would be achieved (e) which then actually ends up being thwarted (i).

3. AFFECTS in Processing

Affective descriptions influence processing by (a) generating expectations, (b) triggering inferences, (c) aiding parsing, and (d) helping construct memories. Consider the first sentence in the third paragraph of DIVORCE-1:

Richard eagerly picked up the phone and dialed.

Subjects infer that Richard will be Paul's lawyer by the time they finish reading this sentence. Yet the same inference would not be made if the story had read:

Richard morosely picked up the phone and dialed.

The expectation that Richard will help Paul is supported by the word "eagerly". But how? BORIS contains a number of affect interpretation rules and affect response rules. Interpretation rules map certain lexical items into AFFECTS. "Eagerly" is interpreted by the rule:

af1: If x is ACTOR of an ACT which is modified by "eagerly"
     Then interpret as: "x feel POS affect while x do ACT"

while the response rules relate these AFFECTS to other knowledge structures. Rule ar1 below:

ar1: If x ASK y to serve as agent for x
     and y has POS affect
     Then y has the goal of being x's agent.

combines Paul's request for a lawyer with the affect arising from "eagerly" to lead BORIS to infer that Richard wants to be Paul's lawyer.

AFFECT information also aids in parsing tasks. Consider again the third paragraph of DIVORCE-1:

Richard eagerly picked up the phone... After a brief conversation, Paul agreed to have lunch with him... He sounded extremely relieved and grateful.

Who does "he" refer to? People usually assume that "he" refers to Paul. After all, it is Paul (not Richard) who is in trouble and who has asked Richard for help. So resolving "he" requires knowing: (a) the goal circumstances of both Paul and Richard, and (b) what "grateful" and "relieved" mean in goal terms. When BORIS sees "he" is bound to the CHAR slot in an AFFECT, it searches for an active character whose goal situation matches the G-SITU associated with that AFFECT. Consequently, Paul is chosen as the referent.
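Rules af1 and ar1 can be read as a pattern-to-structure mapping followed by a structure-combination test. The sketch below renders them in Python using the Affect record sketched earlier; the event and request encodings are our assumptions, not BORIS's actual representations.

def af1(event):
    # af1: "x eagerly does ACT" -> x feels a POS affect while doing ACT.
    return Affect(state="POS", char=event["actor"], g_situ=event)

def ar1(request, affect):
    # ar1: if x asks y to serve as x's agent and y has a POS affect,
    # infer that y has the goal of being x's agent.
    if (request["type"] == "ASK-AGENCY"
            and affect.char == request["askee"]
            and affect.state == "POS"):
        return {"goal": "SERVE-AS-AGENT",
                "actor": request["askee"],
                "for": request["asker"]}
    return None

# Paul asks Richard for legal help; "eagerly" yields a POS affect for
# Richard, so the response rule infers Richard wants to be Paul's lawyer.
ask = {"type": "ASK-AGENCY", "asker": "Paul", "askee": "Richard"}
print(ar1(ask, af1({"actor": "Richard", "act": "DIAL-PHONE"})))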
Norm violations initiate special processing. For example, when BORIS reads:

Paul's salary from the school was very small.

it builds the following structure:

(OCCUPATION ACTOR Paul
            CLASS (TEACHER)
            LOCATION (INSTITUTION FUNCTION (EDUCATION))
            SALARY (SCALE IS (< NORM)))

As BORIS constructs the (< NORM) scale, it tries to understand its significance by checking for a current AFFECT associated with the ACTOR in this structure. The following deviation rule is used to infer that Paul has an active P-FINANCES (preserve finances) goal:

ad1: If x is the ACTOR in a knowledge structure which has an attribute (ATTR)
     and ATTR is >NORM or <NORM
     and the current AFFECT of x is NEG
     Then find a preservation goal G enabled by ATTR
     and assume that G is active

Rule ad1 works in other cases. For instance:

Mary was worried. She was overweight.

Here BORIS will infer that Mary has a P-HEALTH goal active since P-HEALTH is enabled by maintaining normal weight. In contrast, if the sentence had read:

Mary was happy that she was overweight.

then no preservation goal would be inferred.

4. Empathy and Interpersonal Themes

Characters often react emotionally to what other characters have experienced. Such reactions are empathetic, and help specify relationships between characters. Consider the following sentence from paragraph 2 of DIVORCE-1: "Unfortunately, the news was not good." How is "unfortunately" to be processed? Here, it is clear that "unfortunately" should mean:

Richard is negatively aroused because of some goal situation affecting Paul.

But what if the passage had read: "Fortunately, the news was not good." Now, the natural effect upon the reader is to assume that Richard does not like Paul. The same result would be achieved by: "Unfortunately, the news was good." but not by: "Fortunately, the news was good." What is going on here? In each case, Richard is being emotionally influenced by the success (or failure) of Paul's goals. BORIS's ability to make a correct interpretation is based on its knowledge of interpersonal themes (IPTs) as they relate to empathetic reactions. These relationships appear below:

Empathy Table

                 Y has goal FAILURE    Y has goal SUCCESS    interpersonal theme (IPT)
                 X feels NEG           X feels POS           IPT-FRIENDS(X,Y) (goal accord)
                 X feels POS           X feels NEG           IPT-ENEMIES(X,Y) (goal conflict)

Situations of negative empathy arise from the interpersonal theme of IPT-ENEMIES. These situations are signaled by such words as "jealous", "spiteful" and "resentful", and are represented in BORIS by a knowledge construct called an ACE (Affect as a Consequence of Empathy):

ACE                         theme
commiserate, condole        x MTRANS TO y (IPT-FRIENDS x y)
                            that [goal failure (y) causes: x feel NEG]
felicitate, congratulate    x MTRANS TO y (IPT-FRIENDS x y)
                            that [goal success (y) causes: x feel POS]
envy, jealous, spiteful     x MTRANS TO y/x (IPT-ENEMIES x y)
                            that [goal success (y) causes: x feel NEG]
gloat                       x MTRANS TO y/x (IPT-ENEMIES x y)
                            that [goal failure (y) causes: x feel POS]

ACEs are important because they capture empathetic aspects of interpersonal relations. At the end of DIVORCE-1 Richard congratulates Paul. BORIS uses ACE FELICITATION to interpret this as Richard telling Paul that Richard feels happy because Paul has won his divorce case.
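The Empathy Table amounts to a two-way lookup: a known theme plus a goal outcome predicts the empathetic affect, and, run in reverse, an observed affect plus an outcome suggests the theme (this is how "unfortunately" versus "fortunately" is disambiguated). A minimal Python sketch, with assumed names:

EMPATHY = {
    ("IPT-FRIENDS", "FAILURE"): "NEG",   # goal accord
    ("IPT-FRIENDS", "SUCCESS"): "POS",
    ("IPT-ENEMIES", "FAILURE"): "POS",   # goal conflict
    ("IPT-ENEMIES", "SUCCESS"): "NEG",
}

def empathetic_state(theme, outcome):
    # Forward use: predict x's reaction to y's goal outcome.
    return EMPATHY[(theme, outcome)]

def infer_theme(observed_state, outcome):
    # Reverse use: an observed reaction plus outcome suggests the theme.
    for (theme, o), state in EMPATHY.items():
        if o == outcome and state == observed_state:
            return theme
    return None

# "Unfortunately, the news was not good": Richard reacts NEG to Paul's
# goal failure, so IPT-FRIENDS(Richard, Paul) is inferred.
print(infer_theme("NEG", "FAILURE"))     # -> IPT-FRIENDS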
For instance, whenever a character is relying on a plan requiring secrecy to achieve a goal and this plan fails as the result of an eye-witness, TAU-RED-HANDED is instantiated. BORIS instantiat.es this TAU in DIVORCE-l when Paul catches Sarah in bed with another man. Once memories have been built with the aid of AFFECTS, BORIS is able to answer affect-related questions: Ql: How did Richard feel when the letter appeared? RICHARD FELT GLAD BECAUSE PAUL AND HE WERE FRIENDS. Q2: Why was Paul upset about the d PAUL AND SARAH WERE FIGHTING OVER THE FAMILY POSSESSIONS. ivorce? Q3: How did Richard feel when the coffee spilled? RICHARD WAS EMBARRASSED. 44: How did Paul feel when he caught his uife with another man? PAUL WAS SURPRISED. 95: How did Richard feel when Paul won the case? RICHARD WAS HAPPY ON PAUL'S BEHALF. Often the answer is reconstructed from a knowledge structure which was built when the situation (referred to in the question) was originally processed. The AFFECTS in the question-answering above are reconstructed from these knowledge structures: Ql (IPTs), Q2 (GOALS), Q3/Q4 (TAUS), Q5 (ACES). When Richard spills coffee, TAU-REGRETTABLE- MISTAKE is built. This structure represents BORIS’s knowledge about social blunders and how each role (i.e. SCHLEMIEL and SCHLIMAZEL) will feel about the blunder. The SCHLEMIEL should feel embarrassed while the SCHLIMAZEL is expected to be angry. Unless the characters’ reactions violate these expectations there is no reason to instantiate a specific emotional state in episodic memory. To answer Q3, we look for the default AFFECT associated with SCHLEMIEL in TAU-REGRETTABLE-MISTAKE. Likewise, the answer to Q4 is reconstructed from knowledge about TAU- RED-HANDED. Finally, Q5 is reconstructed from ACE FELICITATION, which was instantiated when Richard congratulated Paul. 6. Comparison With Other Work The BORIS affect model derives much of its inspiration from a paper by Roseman [Roseman, 1979) in which he developed five dimensions for categorizing emotions -- i.e. desirability (D), outcomes (0), probability (P), agency (A), and legitimacy (L). Using these dimensions, for example, Roseman represented fear as: D(+) O(-) P(-) or D(-) O(+) P(-). Thus, fear involves the chance that what you do not want may occur, or that what you do want may not occur. Much work on affect deals with the physiological [Izard, 19711 and psychological [Mandler, 19751 [Bower et al., 19811 [Schachter, 19661 complexity of the emotional experience itself. In contrast, BORIS is designed only to understand the conceptual significance of affective reactions on the part of narrative characters. To do so BORIS employs a representational system which relates AFFECTS to one another through decomposition and shared inferences. BORIS is raot intended to model emotional states or experiences themselves. 7. Conclusions The AFFECT model described here represents a first- order approximation of a very complex domain. BORIS affect theory is oriented towards understanding the intentional significance of emotional reactions by inferring their corresponding cognitive structures. Memory structures containing these reactions are exactly those structures which capture events of greatest importance to the characters involved. REFERENCES Bower, G. H., Gilligan, S. G. and Monteiro, K. P. Selectivity of Learning Caused by Affective States. Journal of Ezpetimental Psychology: General, 1981, llO(4). Dyer, Michael G. $RESTAURANT Revisited or ‘Lunch with BORIS’. 
In Proceedings of the 7th International Joint Conference on Artificial Intelligence, August 1981. Held in Vancouver, BC.

Dyer, Michael G. Integration, Unification, Reconstruction, Modification: An Eternal Parsing Braid. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981.

Dyer, Michael G. The Role of TAUs in Narratives. In Proceedings of the Third Annual Conference of the Cognitive Science Society, August 1981. Held at Berkeley, CA.

Dyer, Michael G. In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. Technical Report 219, Computer Science Department, Yale University, 1982. Ph.D. Dissertation.

Dyer, Michael G. and Lehnert, Wendy G. Question Answering for Narrative Memory. In J. F. Le Ny and W. Kintsch (Eds.), Language and Comprehension, Amsterdam: North-Holland, 1982. (in press).

Izard, C. E. The Face of Emotion. NY: Appleton-Century-Crofts, 1971.

Lehnert, W. G. and Burstein, M. H. The Role of Object Primitives in Natural Language Processing. In Proceedings of the 6th International Joint Conference on Artificial Intelligence, 1979.

Lehnert, W. G., Dyer, M. G., Johnson, P. N., Yang, C. J. and S. Harley. BORIS - An In-Depth Understander of Narratives. Artificial Intelligence, 1982. (in press).

Mandler, G. Mind and Emotion. NY: John Wiley, 1975.

Robinson, J. Sampling Autobiographical Memory. Cognitive Psychology, 1976, 8.

Roseman, Ira. Cognitive Aspects of Emotion and Emotional Behavior. 1979. Yale Psychology Dept. Paper read at 87th Annual Convention of the American Psychological Association in NYC.

Schachter, S. The Interaction of Cognition and Physiological Determinants of Emotional State. In Spielberger, C. D. (Ed.), Anxiety and Behavior, NY: Academic Press, 1966.

Schank, Roger and Abelson, Robert. Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Lawrence Erlbaum, 1977. The Artificial Intelligence Series.

Wilensky, Robert. Understanding Goal-Based Stories. Technical Report 140, Department of Computer Science, Yale University, 1978. Ph.D. Dissertation.
AN EXPERT SYSTEM THAT VOLUNTEERS ADVICE

Jeff Shrager
Carnegie-Mellon University
Department of Psychology

Tim Finin
University of Pennsylvania
Department of Computer Science

I. Motivation and System Overview

This paper describes the design and implementation of an expert system that provides novice users with help in using the Vax/VMS operating system. The most interesting feature of our advisor is that it follows the user's interactions with the system and volunteers its help when it believes that the user would benefit from advice. The user need not ask for help or raise an error condition. The advisor recognizes correct yet inefficient command sequences and helps the beginner become more proficient by indicating how these tasks may be done more efficiently.

What are the inefficient command sequences that we are trying to recognize? There are several dimensions to inefficiency in operating system interactions. Operating systems provide many features (such as wild cards in file names, lists of verb targets, etc.) which are meant to minimize the user's work (e.g., typing). Consider:

$PRINT PROGRESS.MEM
$PRINT MEETING.MEM
etc...

rather than:

$PRINT *.MEM

On another dimension, we measure inefficiency in terms of system resources. The system typically provides special functions which perform operations much more sparingly than more general means would permit. Contrast:

$COPY NOTES.* OLDNOTES.*
$DELETE NOTES.*.*

with:

$RENAME NOTES.* OLDNOTES.*

The DCL expert that we have constructed recognizes less efficient sequences and constructs help messages that provide either immediate advice or a pointer to a manual or online HELP entry. Following is a sample of the DCL expert's behavior (the first two lines are entered by the user):

$COPY TEST.TXT EXP1.TXT
$DELETE TEST.TXT;
%If you mean to be changing the name of
% TEST.TXT to EXP1.TXT you might have
% simply used the command:
%   $RENAME TEST.TXT EXP1.TXT
% The HELP command can tell you more
% about RENAME.

II. General Approach

The difficulty of this task is, of course, to recognize when some sequence of commands constitutes a plan that a person is using to achieve a goal. We have approached this problem by collecting a catalog of "bad plans" which novice users often use to achieve common goals. The problem thus reduces to matching command sequences to descriptions of generic plans from the catalogue. In this application, the matching process is complicated by the following issues:

- Non-contiguity: The individual commands which make up a sequence might be spread out over a session. Each of the intervening commands may or may not affect the goal which the overall sequence is meant to achieve.

- Ambiguity: The mapping from plans to goals is many to many. A given sequence may match several plans. A given plan may be realized by several sequences.

- The necessity of extensional knowledge: A given sequence may have side effects or use information not directly expressed in the syntax of the commands. In order to recognize which of several possible goals is being attempted one may, for example, have to expand "wildcard" patterned filenames and select names from the current directory which are referred to by the command at hand.

III. Specific Method

Our goal recognition heuristic is driven by an expectation parser [1]. A KL-One like network [2] describes the commands that form the heads of sequences to be recognized.
Upon instantiation of any new entry in the net, some action takes place as specified by the parsing object for which the new instance represents a case. The actions can activate, deactivate, or modify other objects in the network. Since the contents of the net direct the parser, changing it can substantially affect the way in which future commands are processed and the actions to take place when they are parsed. In particular, parsing objects are added which recognize commands that come later on in the sequence whose head was just recognized. The action taken on recognition of the last command in a sequence typically invokes a help message. The actions have access to the network contents and thus can tailor the message to the case just recognized (or any other information contained therein, since the contents of the net represent everything of which the user has demonstrated knowledge).

IV. A Detailed Example

$DIRECTORY NOTES.* /OUTPUT=OLDFILES.TXT

A more general way of routing output to a file involves the separate commands ASSIGN and DEASSIGN to attach the selected file to the output channel as:

$ASSIGN SYS$OUTPUT OLDFILES.TXT
$DIRECTORY NOTES.*
$DEASSIGN SYS$OUTPUT

The following recognition sequence takes place: Upon recognition of an ASSIGN command, parsing objects for both the DIRECTORY command and the DEASSIGN command are added to the network. If a DEASSIGN command were to be entered at this point, the DIRECTORY object would be deactivated and the DEASSIGN object would deactivate itself. No goal is recognized. If, on the other hand, a DIRECTORY command were entered, the first DEASSIGN object would be detached and a new DEASSIGN object activated whose action represents a successful goal recognition and would trigger an appropriate help message. Note the necessity of the interim DEASSIGN command in order that an ASSIGN + DEASSIGN pair is not mistaken for one with a DIRECTORY command between them.
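The COPY + DELETE = RENAME case can be sketched as a small stateful recognizer: a COPY command activates an expectation, a matching DELETE triggers the advice, and a user-issued RENAME retires the pattern (since the user has demonstrated knowledge of it). The Python below is our illustration of the idea, not the Franz Lisp network the paper describes; all names are assumptions.

def watch_for_rename(commands):
    # Yield advice when "COPY a b" is later followed by "DELETE a".
    pending = {}                           # source file -> copied-to file
    for cmd in commands:
        verb, *args = cmd.split()
        if verb == "COPY" and len(args) == 2:
            pending[args[0]] = args[1]     # expect a DELETE of the source
        elif verb == "DELETE" and args and args[0] in pending:
            dst = pending.pop(args[0])
            yield ("If you meant to rename %s to %s, you could have "
                   "used: $RENAME %s %s" % (args[0], dst, args[0], dst))
        elif verb == "RENAME":
            pending.clear()                # user already knows RENAME

session = ["$COPY TEST.TXT EXP1.TXT", "$DELETE TEST.TXT"]
for advice in watch_for_rename(c.lstrip("$") for c in session):
    print(advice)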
V. Implementation Details

In implementation, the user is communicating via a Franz Lisp program which sends the commands to VMS and then runs those that succeed through the goal recognition parser. Only commands, not the inputs of programs, are trapped. Commands which cause errors are not processed because we assume, first, that they will probably be immediately reentered. Also, the error will certainly cause the parser trouble if the problem was syntactic and will cause the goal recognition processor trouble if the error was semantic. The working system recognizes 5 complete goals and various short-cut strategies (combined command targets, etc).

VI. Discussion

There are implications of this research beyond the additional power that it affords expert systems in general. Since the knowledge network contains an object for each command that has been issued, one can think of that database as a user profile. We have made only marginal use of this potential by arranging the actions so that advice is only given for commands that the user has not already used. This is done by, for example, removing the head parsing object of the COPY + DELETE = RENAME sequence when the beginner uses a RENAME command (this action is a part of the actions associated with the RENAME parsing object). Other novice-user aids could be tailored to this dynamic user profile.

Our system essentially encodes the pattern recognition knowledge of a consultant and applies these patterns to user input. When inefficiencies are recognized, the automatic observer can point the way to a more effective use of system capabilities by referring the user to the help system or some other expert. Otherwise, it can generate a help message of its own using the context of the particular sequence to construct useful dialogue.

Our work represents an attempt to extend the range of interactive advisor systems. User aids currently in service help only users who cause errors which would invoke an error recovery system or those who know how and when to ask for help. We have provided a means by which the system can automatically tutor intermediate level users who do not make trivial errors but who are not using commands in an effective way. We therefore bridge the gap between the introductory user aids and more technically oriented expert advisors.

ACKNOWLEDGMENTS

This work was performed while the first author was enrolled in the masters program in the Department of Computer Science at the University of Pennsylvania. The paper outlines a Master's thesis available as technical report number MS-CIS-81-1 from that department. This work was supported, in part, by NSF grant number MCS 79-08401. Thanks to Ira Winston for construction of the Lisp/DCL interface, lots of useful advice, and excellent Vax systems support. Also, thanks to Peter Buneman for guidance in the morals of software engineering.

REFERENCES and ABBREVIATED BIBLIOGRAPHY

[1] Riesbeck, C. and R. Schank; Comprehension by Computer: Expectation Based Analysis of Sentences in Context; Yale University CS report no. 84.
[2] Brachman, Ron; A Structural Paradigm for Representing Knowledge; BBN Report no. 3605, July 1978.
Ball, Eugene and Phil Hayes; Representation of Task Specific Knowledge in a Gracefully Interacting User Interface; CMU, in AAAI 1980.
Mark, William; Rule Based Inference in Large Knowledge Bases; USC/Information Sciences Institute, in AAAI August 1980.
Genesereth, Michael R.; The Role of Plans in Automated Consultation; Laboratory for Computer Science, MIT.
On Being Contradictory

Margot Flowers
Yale University
Department of Computer Science
PO Box 2158 (YS)
New Haven, CT 06520

Abstract

This paper discusses recognizing and producing contradictions. After illustrating the phenomena of contradiction, the paper presents conceptual classes of contradiction and gives an overview of how they can be recognized. The next part discusses the construction of contradictions, in particular with respect to contradicting historic events. The object of this paper is to examine the computational logic of contradictions, using contradictions as an example of how reasoning processes can and must exploit semantic knowledge and episodic memory, and to illustrate the kind of metaknowledge needed to use certain reasoning devices correctly and effectively.

1. Introduction

How do we know when statements agree with one another and when they do not? Consider:

The Arabs started the 67 war. [a]
The Israelis started the 67 war. [b]

Most people would say that these two statements contradict one another, and the analysis they might provide is that [a] and [b] are similar statements which differ only by the actors. According to this analysis, then, [c] and [d] below should also contradict one another:

The Arabs imported arms. [c]
The Israelis imported arms. [d]

Obviously something is wrong with this analysis. The process of recognizing contradictions involves much more than simple structural matching.

How can the differences between [a]/[b] and [c]/[d] be explained? It is clear that any technique of determining what is a contradiction and what is not must hinge on conceptual meaning. Any technique based on structural form alone would fail, as illustrated above. The question to be explored here can then be rephrased as: What conceptual features determine contradiction or agreement relationships between concepts?

This analysis of contradictions began in the context of ABDUL/ILANA, which models the participant in an argument about the Mid-East [Flowers, McGuire & Birnbaum, 1982], [McGuire, Birnbaum & Flowers, 1981], [Birnbaum, Flowers & McGuire, 1980], and the techniques described here are currently being incorporated into HARRY [Flowers, 1982], a program that models how people reason about historic events.

This work was supported in part by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N00014-75-C-1111, and in part by the National Science Foundation under contract IST7918463.

The approach taken here is that two concepts are contradictory when people perceive that only one of them can be possible or true at the same time. Thus, this analysis is not concerned with logical contradictions, but rather contradictions as people recognize them and use them.

An important aspect of the contradiction relation is that it is dynamic and context-dependent. The background knowledge that people bring to the process of conceptual analysis affects whether two concepts are interpreted as being contradictory or not. This is responsible for the fact that, among other things, additional information can cause seeming contradictions to be reanalyzed so that they are no longer contradictory. Consider:

John walked out on Mary yesterday.
That can't be true, I just saw her today and she was fine.

So far these sentences seem to contradict one another, but with the added information:

Oh, that is because they made up this morning.

what had seemed to be contradictory above, is no longer so.

Contradiction is only one of many possible relationships that can exist between two concepts. Thus, the detection of contradiction is really only a part of a more general analysis process. The reason for addressing contradiction here explicitly is to identify and enumerate what must be included in the process of conceptual analysis, and just as importantly the process of conceptual generation, to account for the phenomena of contradiction. Thus, two major topics of this paper are concerned with contradiction recognition and contradiction generation.

2. The Logic of Contradictions

Often in explanations and arguments, for example, contradictory concepts are juxtaposed. For example, consider this fragment from ABDUL/ILANA:

Israeli: The Arabs [started the 67 war], by the blockade. [e]
Arab: But Israel fired first. [f]

Here, the Arab contradicts the assertion that the Arabs started the 1967 war by introducing the fact that Israel attacked first. So recognizing that there is a contradiction and what is being contradicted is a necessary part of the understanding process. How are any two concepts analyzed to determine this relationship?
Contradiction is only one of many possible relationships that can exist between two concepts. Thus, the detection of contradiction is really only a part of a more geneml analysis process. The reason for addressing contradiction here explicitly is to identify and enumerate what must be included in the pr+ cess of conceptual analysis, and just as importantly the process conceptual generation, to account for the phenomena of contra- diction. Thus, two major topics of this paper are concerned with contradiction recognition and contradiction generation. 2. The Logic of Contradictions Often in explanations and arguments, for example, con- tradictory concepts are juxtaposed. For example, consider this fragment from ABDUL/ILANA: Israeli: The Arabs (started the 67 war], by the block- ade. (e/ Arab: But Israel fired first. If1 Here, the Arab contradicts the assertion that the Arabs started the 1967 war by introducing the fact that Israel attacked first. So recognizing that there is a contradiction and what is being contradicted is a necessary part of the understanding process. How are any two concepts analyzed to determine this relation- ship? 269 From: AAAI-82 Proceedings. Copyright ©1982, AAAI (www.aaai.org). All rights reserved. In this paper I present three classes of contradictions which can be identified according to how the concept is being contradicted. These classes identify how the analysis of poten- tial contradictions must proceed by specifying what the signif- icant parts of the concepts are and what related semantic knowledge is significant. These three classes are presented next. 2.1. Negations The first class of contradiction is contradiction 6~ negation. For example: The Arabs started the 67 war. The Arabs did not start the 67 war. This kind of contradiction is commonly identified in textbooks on logic and rhetoric. The primary source of the contradiction relationship in this case is that the action of one is negated by the modality of the action of the other, with rest of the two concepts matching. In some cases, negation contradictions can sometimes be identified on structural principles alone, without the application of semantic knowledge. Consider: The British srewolfed the tograms. The British did not srewolf the tograms. Although the meaning of these sent,ences is not clear, it is still obvious that they contradict one another. When the action negation is obvious at a structural level, contradiction recognition is not very difficult or interesting. But more difficult cases occur, for example: John payed attention to Mary. John ignored Mary. Here too, one sentence negates the other, but this time at a conceptual level. That is, the action “not paying attention” is the same as “ignoring”. Thus, the recognition of negativity cannot rely upon the analysis of structural properties alone. Such semantic information is even more important in the other two classes of contradiction. 2.2. Direct Contradiction The second class of contradiction is direct contradiction. For example: The Iraquis caused the Persian Gulf war. Id The Iranians caused the Persian Gulf war. iV WThy are these two sentences cont,radictory when others which resemble them are not? What is different about how they are analyzed that affects how they are related to one another? Here the source of contradiction is lies in the interaction of certain conceptual components rather than in action nega- tion. The key to determining this is based on general knowl- edge about wars and causation. 
2.2. Direct Contradiction

The second class of contradiction is direct contradiction. For example:

The Iraqis caused the Persian Gulf war. [g]
The Iranians caused the Persian Gulf war. [h]

Why are these two sentences contradictory when others which resemble them are not? What is different about how they are analyzed that affects how they are related to one another? Here the source of contradiction lies in the interaction of certain conceptual components rather than in action negation. The key to determining this is based on general knowledge about wars and causation.

In this case, we know that [g]/[h] contradict because, from our general knowledge about wars, we know that there can be only one primary instigator. In general, determining if one concept directly contradicts another hinges on recognizing that they describe the same idea except for the presence of contradictory role fillers. This requires analyzing the semantics of the concepts involved to determine when dissimilar role fillers actually conflict. For example, the internal representation of [g] is:

(M-LEAD-TO ANTE (DO ACTOR *IRAQ*)
           CONSE E-PERSIAN-GULF-WAR)

which says that Iraq did something that started the Persian Gulf War. Similarly, [h] is represented as:

(M-LEAD-TO ANTE (DO ACTOR *IRAN*)
           CONSE E-PERSIAN-GULF-WAR)

In this case, the only difference between these concepts is the antecedent component of the two LEAD-TOs, specifically the ACTOR role.

The goal here is to model the processes and assumptions used by people, naive about causation, who analyze and reason about situations. Among unsophisticated arguers over historic events, wars and other events have primary instigators: single responsible intentional cause-ers (although perhaps multiple causes). Here, [g]/[h] above are identified as contradictions after examining the general semantic knowledge associated with M-WAR and seeing that it allows there to be only one cause-er. This means that both of the antecedents cannot be causes, thus the concepts directly contradict one another.

The importance of general knowledge in assessing direct contradictions is illustrated by contrasting the analysis above with:

The Iranians fought in the Persian Gulf war.
The Iraqis fought in the Persian Gulf war.

The analysis of these concepts also must refer to general knowledge about M-WAR as before, but in this case, it utilizes the knowledge about participants of wars. The only limitation on the number of participants in a war is that there must be more than one, thus these two sentences do not present a problem in interpreting them both to be valid at the same time. A sketch of this role-filler analysis is given below.
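The contrast between cause-ers and participants can be rendered as a uniqueness constraint on roles. The sketch below is a minimal illustration under that assumption; the constraint table standing in for the M-WAR knowledge is invented for the example and is not HARRY's actual memory structure.

```python
# A minimal sketch of direct-contradiction detection via role
# fillers. The constraint table standing in for knowledge about
# M-WAR (one instigator, many participants) is an assumption made
# for illustration, not the program's actual knowledge base.

# For each (event, role): may only one distinct filler exist?
UNIQUE_ROLES = {
    ("PERSIAN-GULF-WAR", "cause-er"): True,     # only one instigator
    ("PERSIAN-GULF-WAR", "participant"): False,  # many may fight
}

def direct_contradiction(c1, c2) -> bool:
    """Same event and role, different fillers, role marked unique."""
    if c1["event"] != c2["event"] or c1["role"] != c2["role"]:
        return False
    if c1["filler"] == c2["filler"]:
        return False
    return UNIQUE_ROLES.get((c1["event"], c1["role"]), False)

g = {"event": "PERSIAN-GULF-WAR", "role": "cause-er", "filler": "IRAQ"}
h = {"event": "PERSIAN-GULF-WAR", "role": "cause-er", "filler": "IRAN"}
assert direct_contradiction(g, h)           # [g]/[h] conflict

p1 = {"event": "PERSIAN-GULF-WAR", "role": "participant", "filler": "IRAN"}
p2 = {"event": "PERSIAN-GULF-WAR", "role": "participant", "filler": "IRAQ"}
assert not direct_contradiction(p1, p2)     # both can fight
```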
2.3. Inferential Contradictions

There is a large body of contradictions that do not fall into either of the two classes above, such as [e]/[f] above, and [i]/[j] below:

Arab: Israel is trying to take over the Mid-East. [i]
Israeli: If that were true, then how come Israel didn't take Cairo in '67? [j]

Here there is neither action negation nor contradictory role fillers. What then is the relationship between them? It is clear that there is some relationship that allows us to easily see [j] as contradicting [i]; the problem is determining what this is. The solution is to notice that, from [j], one can make the inference that:

Israel is not taking over the Middle East. [k]

Then [k], which is supported by [j], negates [i]. In general, from an inferential contradiction, one can infer either a negation or a direct contradiction. Inferences such as these are one source of support and attack links used in the construction of argument graphs in ABDUL/ILANA. This then is an example of the third class: contradiction by inference, in which the inferred conceptualization establishes the link between two contradictory concepts.

Inferential contradictions are often the most effective types of contradictions. Negations and direct contradictions have no weight because they only state the fact of disagreement, but provide no basis for that fact. Inferential contradictions on the other hand not only convey contention implicitly, but also explicitly provide a reason that contention is true. Thus, if we consider the construction of contradictions rather than their recognition, it is clear that seeking inferential contradictions is one heuristic to use. This leads to the second part of this paper: overviewing how contradictions are produced.

3. Producing Contradictions

There are many reasons one might want to produce contradictions: for example, in response to a strategic decision to attack some concept, as part of a process to confirm or disbelieve some idea, or to verify input. Thus, the production of contradictions is embedded in the context of some other task, and contradictions often are never produced explicitly. For example, in arguments, ABDUL/ILANA takes advantage of the fact that the best attacks are contradictions that are noticed during the understanding process, rather than ones which are explicitly sought [McGuire, Birnbaum & Flowers, 1981].

The first two types of contradiction suggest two techniques to produce arbitrary contradictions. One way is to mechanically construct a concept's negation:

The West Bank belongs to Israel. [l]
The West Bank doesn't belong to Israel.

This, as pointed out above, requires minimal semantic knowledge. A second way is to replace some arbitrary critical role filler with something conflicting:

The West Bank belongs to Jordan. [m]

There are many problems with this approach. First of all, people are not usually called upon to produce some arbitrary contradiction to a given disembodied concept. Secondly, the motivations for why a contradiction might be desired play a large part in what makes a good contradiction and what makes a bad one. For example, [m] is a good contradiction of [l] for a Jordanian to make, but a Palestinian would prefer instead to say:

The West Bank belongs to the Palestinian people.

Randomly changing one slot filler to another contradictory one does not reflect the plans and goals of the contradictor and is a fatuous thing to do.

The lack of episodic context is another problem with this suggestion of arbitrary contradiction construction. Without an episodic memory providing relevant information, which anchors potential contradictions to the real world, to contradict [l] one could just as easily say:

The West Bank belongs to Britain.
The West Bank belongs to Iran.
The West Bank belongs to Japan.
The West Bank belongs to Al Haig.

These are useless contradictions. The ideal contradiction, for use in a naturalistic task like explanation or argumentation, obviously is one which has some validity in the world. Thus, the best contradictions are produced, perhaps as side effects, by memory recall, rather than by the application of general semantic knowledge in the right way. And since inferential contradictions are the most effective kinds of contradiction, inference classes provide useful heuristics for controlling memory search for contradictions. The key then is to determine what the useful kinds of inferential relations are. Here I will focus on making contradictions to historical events, beginning with the sketch of memory-anchored production below.
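The following sketch contrasts blind slot substitution with production filtered by episodic memory and the arguer's goals. The toy memory, the goal structure, and the filtering rule are my own assumptions made to illustrate why the mechanical approach yields "fatuous" candidates.

```python
# A minimal sketch of why memory-anchored contradiction production
# beats mechanical slot substitution. The episodic memory and the
# arguer's goals shown here are toy assumptions for illustration.

EPISODIC_MEMORY = {"believed-claimants": {"Israel", "Jordan", "Palestinians"}}

def mechanical_candidates(claim, all_actors):
    """Blind role substitution: produces mostly useless candidates."""
    return [{**claim, "owner": a} for a in all_actors if a != claim["owner"]]

def good_candidates(claim, all_actors, arguer_goal):
    """Keep only candidates anchored in memory and serving the
    arguer's goals; the rest are 'fatuous' contradictions."""
    return [c for c in mechanical_candidates(claim, all_actors)
            if c["owner"] in EPISODIC_MEMORY["believed-claimants"]
            and c["owner"] == arguer_goal["favored-owner"]]

claim = {"object": "West Bank", "owner": "Israel"}
actors = ["Jordan", "Britain", "Iran", "Japan", "Al Haig", "Palestinians"]

# A Palestinian arguer keeps only the contradiction worth making.
print(good_candidates(claim, actors, {"favored-owner": "Palestinians"}))
# -> [{'object': 'West Bank', 'owner': 'Palestinians'}]
```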
3.1. Contradicting Historic Events

A powerful way to contradict an assertion about an historic event is to use a counterexample. For example, [i]/[j] above, or:

Israel is not trying to take over the Middle East. [n]
Why then does Israel keep the West Bank? [o]

Thus, looking in memory for a counterexample is a good way to produce a contradiction. One way counterexamples can be found is to exploit previously noticed failed expectations, for example, a time Israel was expected to behave imperialistically but did not. The details of this process are discussed elsewhere [Schank, 1982; Flowers, McGuire & Birnbaum, 1982]. There are other sources of counterexamples, for example based in memory organization:

Israel isn't executing any terrorists. [p]
Yes they are, they killed three last week. [q]

The trouble is that seeming counterexamples do not always effectively produce a contradiction. For example:

Israel is importing arms. [r]
Israel did not import any arms last Tuesday. [s]

Thus, "counterexamples" don't apply all the time. Memory processes have to know when they should consider producing contradictions and when they should not. How can this be determined?

The viability of producing a contradiction can be broken down into two conditions:

1. the memory condition: Is there present some fact which can serve as a contradiction?

2. the semantic condition: Is there a contradiction technique, such as counterexample, which is applicable to the concept to be contradicted?

Meeting the memory condition is of course entirely dependent upon what is in memory and the access techniques used to find it. The principles underlying the semantic condition however depend upon the contradiction candidate itself. What are these principles?

The counterexamples given so far illustrate one principle. The ones which "work" ([n]/[o] and [p]/[q]) have been cases of saying:

Someone is not doing an action.
Yes they are, here is an instance.

The ones which instead "fail" are cases of asserting:

Someone is doing an action.
No they aren't, here is a time they didn't.

They illustrate a variation of the well known fact that in order to disprove a rule, all you need to do is find one case that violates it; but to prove it, it is not sufficient to exhibit one case that conforms to it -- you have to show that every case conforms to it. This then identifies part of the semantic condition on the use of counterexamples:

Counterexamples are effective in contradicting an assertion that some action or state is not occurring.

However, this does not describe all of the aspects of the semantic condition. For example, there are cases in which counterexamples apply to contradicting positive assertions as well. Consider [i]/[j] above, or:

Israel is trying to take over the middle east.
Then why haven't they taken Lebanon?

This is a case of a counterexample being effective against a positive assertion. Clearly then, there are other principles involved in the semantic condition for making counterexamples. The goal then is to identify the other classes of principles, which affect, among other things, representational criteria and applicable inference classes. HARRY uses four historical event classes that embody some of this information [Flowers, 1982]. Thus, historical event classes can be used to partially determine what kinds of contradictions are effective and should be sought when making a contradiction. They are:

atomic events -- historical events whose decomposition is not "interesting" at the current level of analysis, e.g. one is concerned with the initial attack by Israel in 1967, but not individual tank movements, etc.

episodic events -- sequences of causally/temporally linked events whose components are "interesting" and themselves are significant events, e.g. the 1967 war, which consists of attacks, battles, cease fires, and so forth.
stream events -- events which also consist of many subevents but which are repeated instances of the same prototype, e.g. Israel importing arms.

interpretive events -- events based in the analysis of actions rather than in physical actions, often involving goals, of indefinite duration, and not composed of subevents, e.g. Israel taking over the Mid-East.

Because historical event classes partially determine how events are represented and organized in memory, as well as the kinds of inferences that can apply, they can be used by memory access processes to identify potentially applicable contradictions and what kinds of memory components to seek. The problem of determining when counterexamples are applicable to positive events instead of negative events is solved by applying more constraints on the semantic condition:

Counterexamples are effective in contradicting an assertion that some action or state is occurring only if the assertion is an interpretive event.

According to this principle, [n], a positive interpretive event, can be effectively contradicted by counterexamples like [o], but positive stream events, like [r], cannot.

Some of the contradiction techniques whose applicability can be predicted by the historical event classes are: exemplars, counterexamples, instances, components, and initiating or terminating events. As with the examples above, not all of them apply to all kinds of events. Thus, the value of using these event classes is that they are one way of determining what to seek and what not to seek in memory in order to produce good contradictions, as well as for memory search and verification in general.

4. Conclusions

Contradictions are very important to a variety of tasks involving reasoning skills. This paper has given an overview of the kind of methods being incorporated into the reasoning model HARRY to recognize and produce contradictions and some of the information involved in doing so. The observations are these:

- Analyzing or recognizing contradictions is very different from producing them.

- Contradiction processing occurs primarily at the conceptual level, rather than at the structural level, and is grounded in and directed by semantic knowledge.

- Contradiction processing relies upon episodic memory.

- Contradiction processing is an interaction between many sources of knowledge, including the goals of both the producer of the contradiction and the understander of the contradiction.

- Contradiction processing is a part of other processes: the understanding and generation of conceptual input can involve analyzing and producing contradictions.

Contradictions are only one of many reasoning devices. The use of any reasoning device requires knowledge about the use of the device itself and how it interacts with particular domains for it to be employed correctly and effectively.

Acknowledgements

The author is grateful to Michael G. Dyer, who made many useful editorial comments on a previous draft of this paper. The author also wants to acknowledge Larry Birnbaum and Rod McGuire, the other members of the ABDUL/ILANA project. Aspects of event classes have benefited from discussions with Gregg Collins, Ernie Davis, and Michael Wilk, as well as with Larry and Rod.

References

Birnbaum, Lawrence, Margot Flowers, and Rod McGuire. Towards an AI Model of Argumentation. Proceedings of the First AAAI Conference (Stanford), 1980.

Flowers, Margot. Reasoning and Memory: A Computer Model of Human Reasoning.
Ph.D. thesis, Yale University, 1982.

Flowers, Margot, Rod McGuire, and Larry Birnbaum. Adversary Arguments and the Logic of Personal Attacks. In Wendy Lehnert and Martin Ringle (Eds.), Strategies for Natural Language Processing, pp. 275-293, Lawrence Erlbaum Associates, 1982.

McGuire, Rod, Lawrence Birnbaum, and Margot Flowers. Opportunistic Processing in Arguments, pages 58-60. Proceedings of the Seventh International Joint Conference on Artificial Intelligence (IJCAI-81), 1981.

Schank, R. C. Dynamic Memory: A Theory of Learning in Computers and People. Cambridge University Press, 1982.
THE ROLE OF EXPERIENCE IN DEVELOPMENT OF EXPERTISE

Janet L. Kolodner
School of Information and Computer Science
Georgia Institute of Technology
Atlanta, GA 30332

INTRODUCTION

Perhaps the most distinguishing feature of an expert is that when given a novel problem to solve in his or her domain of expertise, the expert can solve the problem easily. Novices, on the other hand, are good at dealing with typical problems or "classic" cases, but not novel problems. In people, the evolution from novice to expert happens as a result of being able to introspect and examine the knowledge used in solving problems. That introspection and examination allows people to learn from experience. A human expert can interpret a new case in terms of something (either a previous case or generalized knowledge) he is already familiar with. This implies that as an expert is having new experiences, he is evaluating and understanding them in terms of past experiences. In the process, he is integrating the new experience into his memory so that it too will be accessible to use in understanding a later case.

AN EXAMPLE

The following example [7] illustrates experiential learning in the psychiatric domain:

Dr. X sees a patient who seems to show classic signs of major depression. She is 38 and complains of depression. In the last month, she has been unhappy, suffering from insomnia and crying. She reports poor concentration and diminished interest in activities, and that she has been depressed since childhood when her father deserted the family. She has previously been diagnosed as depressive, and was treated in a mental hospital with antidepressants. She was sickly as a child, has had a drinking problem, and has had a number of physical illnesses which doctors have not been able to find causes for.

Seeing that she has been treated previously for depression, that her chief complaint is depression, and that she has insomnia, poor concentration, and diminished interest, Dr. X concludes that this patient is suffering from Major Depression, Recurrent, without Melancholia. He treats her with antidepressants. The antidepressants seem to work, but the woman keeps coming back complaining of additional major physical disorders. Dr. X begins to think that there may be some other problem which he had not accounted for. Taking a further history, he finds out more about the medical problems she has had. They have been numerous, and doctors have been unable to find organic reasons for them. He realizes that the large number of medical problems is important to consider. Going through the diagnostic process again using that symptom as the predominant clinical feature, he realizes that he should have diagnosed her for Somatization Disorder in addition to the diagnosis of Depression.

As a result of this case, Dr. X should learn the following:

1. it is important to take medical history into account in choosing predominant clinical features,

2. depression can camouflage somatization disorder, and

3. a patient who is highly depressed but who complains about medical problems may be suffering from somatization disorder in addition to depression.

Using the first fact, he should be able to refine his rules for choosing predominant clinical features. This case should help him conclude that medical history may be a more important clinical feature than he had previously realized.

This research has been supported in part by NIH-BRSG Grant #5 S07 RR 07024-16 and in part by NSF Grant #IST-8116892.
In addition, the next time he finds that a treatment has failed, he may be biased towards finding out facts about the patient's medical history that he hadn't known. Using the second fact, he should be skeptical of diagnoses of depression coming from other doctors, and will want to find out more about the medical history of a new patient before taking a previous diagnosis seriously. The third fact gives the relationship between depression and somatization disorder, which could be helpful in diagnosing and treating later cases.

Furthermore, this case should enable the doctor to hypothesize that there is often more than one symptom that is primary, and that a patient's chief complaint may not be the most important symptom to look at. This experience should also help him learn that current problems must be separated from long-standing problems and both must be taken into account. Later cases should enable him to recognize and separate current from long-standing problems, to confirm these two hypotheses, and to learn specifically which long-standing problems and which chief complaints are likely to be significant.

THE SHRINK PROJECT

We are investigating the relationship between experience and the development of expertise in the domain of psychiatry. When complete, our program, SHRINK, will analyze new psychiatric cases based on previous cases it has seen. It will integrate new cases into its memory as it is processing them, and will build up its expertise based on generalizations it has made concerning the similarities between cases it has seen (similarity-based learning, [2] [3]). Its expertise will also be heightened through analysis of failures in diagnosis and treatment (failure-based learning, [6] [4]).

A recently-published psychiatric manual [1] provides a procedure for diagnosis. Knowledge about symptoms and knowledge about diseases are integrated into the process. Related disorders are organized in decision trees. Predominant symptoms (called primary indices) suggest entry points into those trees. The book also specifies sets of necessary, supporting, additional, and exclusion criteria for each currently-recognized psychiatric disorder. We are starting with the processes specified in that book as a model of a novice diagnostician, and giving our program experiences which allow it to reorganize that knowledge "episodically" and learn. The following is SHRINK making a diagnosis.

--- Patient Background and Information ---
The patient is Ms. 163, a female, 25 years old.
She has been exhibiting the following symptoms:
  attempted suicide by CARBON-MONOXIDE
  SIGNIFICANT DECREASE in state S-WEIGHT
  SIGNIFICANT DECREASE in state S-MENTAL-FOCUS
  SIGNIFICANT DECREASE in drive D-SLEEP
  less than NORMAL on the HAPPINESS scale

**** Begin Diagnostic Impression ****
Deciding predominant clinical symptom:
  attempted suicide by CARBON-MONOXIDE
Starting differential diagnosis at: DD-MOOD-DISTURBANCE
Possible diagnoses are: (MAJOR-DEPRESSION)

**** Begin Diagnostic Evaluation ****
. . .
Symptoms confirm MAJOR-DEPRESSION
Ms. 163 suffers from: MAJOR-DEPRESSION
  supported by: SINGLE-EPISODE-MAJOR-DEPRESSION
  supported by: DEPRESSIVE-EPISODE

Currently, SHRINK knows about Major Depression and diagnoses normal cases of single-episode depression and manic depressive syndrome using the novice procedure. In addition to making diagnoses, the machinery for learning by experience is being put into place. A sketch of the novice procedure that produces traces like the one above appears below.
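The trace follows the manual-derived novice procedure: pick a predominant symptom, use it to enter a differential-diagnosis decision tree, then check each candidate disorder's criteria. Here is a minimal sketch of that control loop; the symptom table, tree, and thresholds are toy assumptions invented for illustration, not SHRINK's actual knowledge base.

```python
# A minimal sketch of the novice diagnostic procedure described
# above. The symptom-to-tree table and the criteria are toy
# assumptions, not SHRINK's actual knowledge.

ENTRY_POINTS = {"suicide-attempt": "DD-MOOD-DISTURBANCE"}

DECISION_TREE = {"DD-MOOD-DISTURBANCE": ["MAJOR-DEPRESSION"]}

CRITERIA = {
    "MAJOR-DEPRESSION": {
        "necessary": {"depressed-mood"},
        "supporting": {"insomnia", "weight-loss", "poor-concentration",
                       "diminished-interest"},
        "min_supporting": 3,
        "exclusion": {"bizarre-behavior"},
    },
}

def diagnose(symptoms, predominant):
    """Enter the tree at the predominant symptom, then evaluate each
    candidate disorder's necessary/supporting/exclusion sets."""
    candidates = DECISION_TREE[ENTRY_POINTS[predominant]]
    confirmed = []
    for disorder in candidates:
        c = CRITERIA[disorder]
        if (c["necessary"] <= symptoms
                and len(c["supporting"] & symptoms) >= c["min_supporting"]
                and not (c["exclusion"] & symptoms)):
            confirmed.append(disorder)
    return confirmed

symptoms = {"depressed-mood", "insomnia", "weight-loss", "poor-concentration"}
print(diagnose(symptoms, "suicide-attempt"))   # ['MAJOR-DEPRESSION']
```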
The system indexes cases it has seen, and can pick out the similarities and differences between individual cases. It keeps track of the decisions it makes in doing its diagnoses and records those decisions and the reasons it had for making them. We have also given it rudimentary treatment capabilities. SHRINK is now ready to learn. The first example it will work on is the one above.

MEMORY STRUCTURES

In refining task expertise, two different types of knowledge must be learned or refined -- domain knowledge used by the reasoning process, and the reasoning process itself. In order to incrementally refine diagnostic procedures (i.e., the reasoning process), memory must have an explicit model of the procedures it is employing. We call these memory structures PROCESS MOPs (Memory Organization Packets, [5] [2]). The prime PROCESS MOP is DIAGNOSE. The first step in psychiatric diagnosis involves examining the patient and choosing predominant symptoms. Probable disorders are chosen based on that set of symptoms (differential diagnosis). Each is then evaluated in more detail and unlikely diagnoses are deleted (diagnostic evaluation). Failures in the diagnostic evaluation can suggest additional disorders which must be evaluated.

The diagnosis process is driven by diagnostic knowledge -- knowledge about particular disorders. The structures which record knowledge about particular disorders are called DIAGNOSTIC MOPs. The figure below shows some of the information SHRINK has about "Depressive Episodes", one of its diagnostic MOPs. Presence of a "depressive episode" is necessary to diagnose all "major affective disorders", including "major depression" (two more of its DIAGNOSTIC MOPs).

A third type of knowledge memory must have is symptom knowledge -- knowledge of how particular symptoms and their combinations tend to predict particular diagnostic categories. Symptoms suggest entry points into differential diagnosis decision trees, thus suggesting general diagnostic categories a patient may fit into. Attempted suicide, for example, suggests severe mood disturbance, which suggests a possibility of depression.

Each step of the diagnostic process is guided by either disorder or symptom knowledge. After predominant clinical features (major symptoms) are chosen, the knowledge associated with them is used to choose starting points for differential diagnosis. Only those categories implied by the primary symptoms are considered in initial differential diagnosis. Differential diagnosis is guided by knowledge about the relationships between diagnostic categories. Disorder knowledge (in particular, knowledge about the normal symptoms and exclusion criteria associated with particular disorders) also guides diagnostic evaluation once possible disorders have been established through differential diagnosis.
---------------------------------------------------
DEPRESSIVE EPISODE

ESSENTIAL FEATURES:
  at least 1 of:
  (1) dysphoric mood
  (2) pervasive loss of interest or pleasure in usual pastimes and activities.

SUPPORTING FEATURES:
  at least 4 of:
  (1) a significant change in appetite.
  (2) a disturbance in sleep habits.
  (3) psychomotor agitation or retardation.
  (4) decrease in sexual drive.
  (5) loss of energy or fatigue.
  (6) feelings of worthlessness, self-reproach, or excessive or inappropriate guilt.
  (7) complaints or evidence of diminished ability to think or concentrate.
  (8) recurrent thoughts of death, suicide, death wishes, or suicide attempt.
  time constraint: symptoms must have been present simultaneously and for at least 2 weeks.

EXCLUSION CRITERIA:
  (1) preoccupation with a mood-incongruent delusion or hallucination (=> PSYCHOTIC DISORDER)
  (2) bizarre behavior (poss => PSYCHOTIC DISORDER)

ADDITIONAL CRITERIA:
  negate diagnoses of:
  (1) SCHIZOPHRENIA
  (2) SCHIZOPHRENIFORM DISORDER
  (3) ORGANIC MENTAL DISORDER
  (4) UNCOMPLICATED BEREAVEMENT

TREATMENT:
  choose a combination of:
  (1) antidepressant treatment
  (2) hospitalization
  (3) ECS therapy
  (4) analysis
---------------------------------------------------

INCREMENTAL LEARNING

In order for failures in procedure to refine previously-held diagnostic rules, memory's organization must be updated with each new experience. For that to happen in the memory structures we have defined, two things must happen when an experience deviates from the expected. First, the deviant path through appropriate PROCESS MOPs must be recorded. Second, deviant features of this case with respect to previous similar cases must be recorded in DIAGNOSTIC MOPs.

There are two reasons for recording these differences. First, they should be recorded so that when a similar deviation occurs later, the original case can be remembered. The two cases can then be examined for similarities, and generalizations based on their similarities can be used to evolve a new diagnostic category. This results in similarity-driven learning. When later similar cases occur, the knowledge necessary to deal with them will already be in memory. Second, a deviation signals that additional reasoning must be attempted. Extra effort is then applied to "explain" the deviation. If an explanation is found, diagnostic procedures are updated, and that case is maintained as support of the new procedure. If no explanation is found, the deviation marks a problem that must be resolved later. Later, when a similar problem case is encountered, the marker signals that both the old case and the new one should be examined to see if between them they provide enough evidence to explain the violation. This is the process of failure-driven learning. Markers in process MOPs allow incremental process changes. Markers in diagnostic MOPs allow refinements in diagnostic categories. A sketch of this marking scheme follows.
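Here is a minimal sketch of the deviation-marking and reminding mechanism just described: a deviant case is indexed under its diagnostic MOP by its deviant features, so that a later similar case triggers a reminding. The data layout is an illustrative assumption, not SHRINK's actual MOP format.

```python
# A minimal sketch of deviation marking: a deviant case is indexed
# under its diagnostic MOP by its deviant features, so a later
# similar case triggers a reminding. The layout is an illustrative
# assumption, not SHRINK's actual MOPs.

class DiagnosticMop:
    def __init__(self, name):
        self.name = name
        self.exception_index = {}   # deviant feature -> list of cases

    def record_deviation(self, case, deviant_features, explanation=None):
        """Index an exceptional case under each of its deviant features;
        an unexplained deviation is simply a marker to revisit later."""
        for feature in deviant_features:
            entry = {"case": case, "explanation": explanation}
            self.exception_index.setdefault(feature, []).append(entry)

    def remindings(self, new_case_features):
        """Return previously marked cases sharing a deviant feature."""
        return [e for f in new_case_features
                for e in self.exception_index.get(f, [])]

depression = DiagnosticMop("MAJOR-DEPRESSION")
depression.record_deviation(
    case="Ms. 163",
    deviant_features={"many-unexplained-medical-problems"},
    explanation=None)   # not yet explained: a problem marker

# A later patient with the same deviant feature recalls Ms. 163.
print(depression.remindings({"many-unexplained-medical-problems"}))
```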
The Example in More Detail

To see how we aim to get the computer to learn incrementally, we will consider the example above in more detail. In order to make the major depression diagnosis, the doctor must have the following knowledge:

1. Depression, when reported by a patient, means severe unhappiness.

2. Severe unhappiness is a form of mood disturbance.

3. Mood disturbance (a symptom) suggests that the patient suffers from major depression.

This knowledge allows a differential diagnosis of "major depression" to be made. We also assume that the doctor knows and can recognize symptoms of a depressive episode. He recognizes three supporting symptoms for a depressive episode in addition to mood disturbance -- insomnia, poor concentration, and diminished interest. Because the patient has previously been diagnosed as depressive, he is willing to conclude that she has had a depressive episode even though only three symptoms are present (normally, diagnosis of a depressive episode requires four symptoms). The doctor thus concludes that the patient has Major Depression, Recurrent, without Melancholia. Note that because a rule was stretched, this judgement may be suspect, and that if a failure occurs, this step will have to be examined as a possible reason for the failure. We assume that this doctor knows that a common treatment for major depression is antidepressants, and therefore prescribes that treatment. As a result of the medication, her depressive symptoms go away, the expected successful result of the medication.

In this case, however, there is also an additional response to the medication. The patient complains of a number of physical symptoms. This is where failure-based reasoning comes in. This is unexpected, and must be explained. There are two problems to look at here -- the process of noticing violations and the process of explaining them. A violation of an expectation may be a reaction completely different than the expected one, or there may be varying degrees of resemblance. Alternatively, as in this case, a violation is recognized even though the expected reaction is present, since there is additional behavior that cannot be accounted for.

The problem of "assigning blame" for expectation violations or failures is a hard problem both for people and for computers. In this case, there are two things to explain. First, causes for the new problem must be found. Second, the doctor must find if the new problem could have been prevented (i.e., what, if anything, failed in the diagnosis or treatment process). There are three possible causes for the patient's new complaint:

a. the medicine is producing side effects,

b. the patient has developed a new physical ailment, or

c. the patient is imagining her disorders.

Each of these possibilities must be checked out. We assume the first doesn't hold. Considering the next possibility, appropriate diagnostic tests should be performed on the patient. After taking further history, performing a physical examination, and doing screening tests, if no organic reason for the illness is found, then the doctor should be "reminded" [5] [2] of the patient's previous medical history. This should happen because this experience is similar to the previous medical experiences the patient has reported to the doctor -- she has a history of physical illnesses for which doctors have been unable to find causes. This should lead the doctor to take a further history to check the possibility of Somatization Disorder.

Note that there are many types of "reminding" that need to be done in the processing we are suggesting. The reasoner (whether human or computer) should be reminded of previous cases similar to the one it is currently working on so that knowledge gleaned from those cases can be used to reason about the new case. Such reminding is required, for example, in relaxing diagnostic rules. In addition, the reasoner must be reminded of episodes associated with the case it is currently working on, so that it can notice patterns of behavior associated with the current patient. The implication here is that memory must be organized along both of these dimensions. Reminding must also allow a third type of knowledge to be remembered -- symptom knowledge. Recall that when the doctor realizes that the patient is probably imagining her symptoms and that she has a long history of imagining physical disorders, he hypothesizes that she might be suffering from Somatization Disorder. In order for him to make that hypothesis, he must be reminded of the following piece of symptom knowledge:
4. Excessive medical history in females implies a somatization disorder.

Why can the doctor remember this rule now when he didn't remember it during the initial diagnosis? Our claim is that knowledge only becomes available when there is a way to direct memory processes toward it. He did not remember this rule initially because he was not focussing on the medical history of the patient in trying to come up with predominant clinical features. Instead, he was focussing on the then current problem. Because he was focussing entirely on the mood disturbance, only knowledge associated with mood disturbances was accessible for reasoning. Only when attention is directed to the patient's physical disorders and medical history as possible clues to her illness does the knowledge associated with those symptoms become available. Memory must be organized so that relevant knowledge is available when attention is directed to it, and so that irrelevant knowledge does not get in the way of reasoning.

Using the symptom knowledge listed above, the doctor can now hypothesize that the patient has a Somatization Disorder in addition to depression. He does a differential diagnosis and diagnostic evaluation based on that hypothesis and finds that her past medical history does support the hypothesis.

At this point, the doctor has corrected his initial mistake. In order to learn from it, he must figure out whether and why he made the mistake. In general, the problem of figuring out where in the process a mistake was made is hard. In the domain of diagnosis and treatment, there are four possibilities:

a. the diagnosis was wrong, and therefore the treatment is unsatisfactory,

b. the diagnosis was right, but the treatment was not appropriate to the diagnosis,

c. the diagnosis was right, but the treatment didn't work, or

d. the treatment and diagnosis were right, but something new has come up.

In finding the cause of the new complaint, the doctor has already found that an additional diagnosis had to be made, so the original diagnosis could have been wrong (a). On the other hand, he might not have had the necessary information initially to make this diagnosis. Therefore, (d) could also be the case. To distinguish between the two of these, the doctor must decide whether he had the necessary information in the beginning to make the correct diagnosis. The key to making that decision is figuring out where in the diagnosis there might have been a problem and how it could have been corrected. Diagnosis consists of a number of processes and in general, a reasoning failure may happen during any of them. In this case, however, there is a direct route to finding the initial failure. Having corrected the mistake, the diagnostician has the crucial piece of information that allows him to figure out where in the process he went wrong. He knows that the symptom knowledge in rule 4 would have been necessary to initially diagnose Somatization Disorder. Furthermore, he knows that symptom knowledge of this sort is part of the initial diagnosis process -- establishing primary indices. His error, then, was in choosing predominant clinical features. Once he knows where in the process the error occurred, he must determine whether he had enough information initially to include Somatization Disorder originally. He did have this knowledge, since the patient had already reported having been sickly and having had a number of illnesses for which no organic causes could be found.
He concludes that he should have paid attention to that initially, and marks "medical history" as a patient feature which he should pay more attention to in the future.

In addition to concluding that medical history is an important clinical feature to consider in diagnosis, something more specific is learned -- the relationship between somatization disorder and depression. This will be represented in a number of ways in episodic memory. First, there will be markers or indices associated with major depression. One way this case differed from normal major depressive cases is that the patient had a large number of previous medical illnesses. That patient feature will be one of the features which index this individual case among the major depression cases diagnosed and treated. If another case comes up in which the patient is depressive and also has had a number of previous illnesses, then this case, which included an additional diagnosis of somatization disorder, will be remembered. This case will also be indexed as one in which the patient began complaining of other illnesses after treatment for depression seemed successful. If another similar case comes up, the doctor or system can be reminded of this case. That should cause him to wonder whether there is a previous medical history which he had not elicited from the patient, and if the second patient also has somatization disorder. Episodic memory will contain similar markers associated with somatization disorder relating it to depression.

DISCUSSION

There are two major implications of the model we have presented. First, it is set up to handle exceptional cases very nicely. The general case is stored as a diagnostic category. In any but the exceptional cases, the general knowledge associated with the diagnostic category is used for diagnosis and treatment. As exceptions are encountered (e.g., the somatization case above where normal depressive treatment was inadequate), they are indexed off of the diagnostic category. If an explanation of the exception has been made, it is stored along with the exceptional case. When a new case is reminiscent of a previous exceptional case, knowledge about the previous case can be used to deal with the new case. When an exception has been encountered and dealt with successfully a number of times, it evolves into a new diagnostic category with its own specialized diagnostic and treatment rules. The general rules associated with the original diagnostic category do not change, however, unless an exception becomes the generalized case. A sketch of this exception-to-category evolution follows.
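The paper does not say how many successful encounters it takes for an exception to spin off a category, so the fixed threshold in the sketch below is purely an assumption; the structures are likewise illustrative, not SHRINK's.

```python
# A minimal sketch of repeatedly confirmed exceptions evolving into
# a new diagnostic category, as described above. The threshold and
# the structures are illustrative assumptions only.

PROMOTION_THRESHOLD = 3   # assumed: successes before a category forms

def note_exception(category, feature, case, new_categories):
    """Index an exceptional case off its category; once the same
    exception has been handled successfully enough times, spin it
    off as a specialized category. The parent's rules do not change."""
    cases = category["exceptions"].setdefault(feature, [])
    cases.append(case)
    if len(cases) >= PROMOTION_THRESHOLD:
        name = f"{category['name']}+{feature}"
        new_categories[name] = {"name": name, "exceptions": {},
                                "support": list(cases)}

major_depression = {"name": "MAJOR-DEPRESSION", "exceptions": {}}
evolved = {}
for case in ["Ms. 163", "Mr. 201", "Ms. 245"]:
    note_exception(major_depression, "somatization-history", case, evolved)

print(list(evolved))   # ['MAJOR-DEPRESSION+somatization-history']
```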
Another advantage this approach has is that it deals with both experiential knowledge and facts in the same way. Both are stored in the same structures and organized identically. This means that both are equally accessible and both can be used in reasoning. An implication of this organization is that it is amenable to new information from any source. New treatments and methods of diagnosis discovered through experience and those learned through journal articles or from others can be added to memory in the same way. Hopefully, the same processes used for reorganizing memory due to failure in experience can be used to reorganize memory based on new information acquired elsewhere. Of course, as in people, only through experience will all the implications of such knowledge be learned and added to memory.

This paper has pointed out how experience aids in developing the expertise necessary for expert reasoning. It has also introduced a computer program based on these ideas. The research and the program are still in a state of infancy. Nevertheless, we see this approach as having a great deal of promise both in terms of implementing expert computer systems and in helping us to understand the cognitive processes underlying expertise.

Dr. Robert M. Kolodner, a psychiatrist at the Atlanta VA Medical Center, has been the major informant for this project. Keith McGreggor has done all of the programming. Both have been helpful in formulating the ideas put forth in this paper.

REFERENCES

[1] American Psychiatric Association (1980). Diagnostic and Statistical Manual of Mental Disorders (Third Edition). American Psychiatric Association, Washington, D.C.

[2] Kolodner, J. L. (1980). Retrieval and organizational strategies in conceptual memory: a computer model. Ph.D. thesis. Research Report #187. Dept. of Computer Science, Yale University, New Haven, CT.

[3] Lebowitz, M. (1980). Generalization and Memory in an Integrated Understanding System. Ph.D. thesis. Research Report #186. Department of Computer Science, Yale University, New Haven, CT.

[4] Riesbeck, C. K. (1981). Failure-driven reminding for incremental learning. In Proceedings of IJCAI-81.

[5] Schank, R. C. (1980). Language and memory. Cognitive Science, Vol. 4, No. 3.

[6] Schank, R. C. (1981). Failure-driven memory. Cognition and Brain Theory, Vol. 4, No. 1.

[7] Spitzer, R. L., Skodol, A. E., Gibbon, M., and Williams, J. B. W. (1981). DSM-III Case Book. American Psychiatric Assoc., Washington, D.C.
THE CMU ROVER

Hans P. Moravec
Robotics Institute
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

An advanced camera-equipped mobile robot is being built at the CMU Robotics Institute to support research in control, perception, planning and related issues. It will be used initially to continue and extend visual navigation work (the Cart Project) completed in 1981 at Stanford's AI Lab, but is designed with mechanical, sensor and controller flexibility which should permit it to support a wide variety of other efforts.

Introduction

Experience with the Stanford Cart [2], a minimal computer controlled mobile camera platform, suggested to me that, while maintaining such a complex piece of hardware was a demanding task, the effort could be worthwhile from the point of view of AI and vision research. A roving robot is a source of copious and varying visual and other sensory data which force the development of general techniques if the controlling programs are to be even minimally successful. By contrast the (also important) work on disembodied visual data often focuses on relatively small image sets, and is in some danger of overspecializing. Drawing an analogy with the natural world, I believe it is no mere coincidence that in all cases imaging eyes and large brains evolved in animals that first developed high mobility.

Other groups have come to similar conclusions [3] or have developed sophisticated rovers with an eye to near term applications [1] [4]. The new Unimation rover [1] is very similar in size, shape and mechanical capabilities to the machine we are building. It achieves a full three degrees of freedom of floor-plane mobility by use of three novel "omnidirectional" wheels which, by virtue of rollers in place of tires, can freely move in a direction broadside to the wheel plane, as well as performing, under motor control, the usual wheel motion. We achieve the same effect with steerable wheel assemblies, described below.

Overview

The project so far has been focused on developing a smoothly functional and highly capable vehicle which will serve a wide variety of future research. A hint of this future work can be found in [2] and later in this paper. The shape, size, steering arrangements and onboard and external processing capabilities of the rover system were chosen to maximize the flexibility of the system (naturally limited by present day techniques). The robot is cylindrical, about a meter tall and 30 cm in diameter (Figure 1) and has three individually steerable wheel assemblies which give it a full three degrees of freedom of mobility in the plane (Figures 2, 3). Initially it will carry a TV camera on a pan/tilt/slide mount, several short range infrared and long range sonar proximity detectors, and contact switches. Our design calls for about a dozen onboard processors (about half of them powerful 16 bit MC68000s) for high speed local decision making, servo control and communication (Figure 4). Serious processing power, primarily for vision, is to be provided at the other end of a remote-control link by a combination of a host computer (a VAX 11/780), an FPS-100 array processor and a specially designed high performance analog data acquisition and generation device.

We hope eventually to provide a manipulator on the rover's topside, but there is no active work on this now. We chose the high steering flexibility of the current design partly to ease the requirements on a future arm. The weight and power needed can be reduced by using the mobility of the rover to substitute for the shoulder joint of the arm. Such a strategy works best if the rover body is given a full three degrees of freedom (X, Y and angle) in the plane of the floor. Conventional steering arrangements as in cars give only two degrees at any instant.
Details

Three degrees of freedom of mobility are achieved by mounting the chassis on three independently steerable wheel assemblies (Figure 2). The control algorithm for this arrangement at every instant orients the wheels so that lines through their axles meet at a common point. Properly orchestrated this design permits unconstrained motion in any (2D) direction, and simultaneous independent control of the robot's rotation about its own vertical axis. A sketch of this steering computation is given below.

An unexpected benefit of this agility is the availability of a "reducing gear" effect. By turning about the vertical axis while moving forward the robot derives a mechanical advantage for its motors. For a given motor speed, the faster the rover spins, the slower it travels forward, and the steeper the slope it can climb. (Visualization of this effect is left as an exercise for the reader.)

To permit low friction steering while the robot is stationary, each assembly has two parallel wheels connected by a differential gear. The drive shaft of the differential goes straight up into the body of the robot. A concentric hollow shaft around this one connects to the housing of the differential (Figure 3). Turning the inner shaft causes the wheels to roll forwards or backwards; turning the outer one steers the assembly, with the two wheels rolling in a little circle. The assemblies were manufactured for us by Summit Gear Corp.

Each shaft is connected to a motor and a 4000 count/revolution optical shaft encoder (Datametrics K3). The two motors and two encoders are stacked pancake fashion on the wheel assembly, speared by the shafts. There are no gears except for the ones in the differential. (Figure 3 shows a schematic cross section of a complete motor/wheel-assembly structure, and a partially assembled stack in the flesh.)

The motors are brushless with samarium-cobalt permanent magnet rotors and three-phase windings (Inland Motors UM-3201). With the high energy magnet material, this design has better performance when the coils are properly sequenced than a conventional rotating coil motor. The coils for each are energized by six power MOSFETs (Motorola MTP1224) mounted in the motor casing and switched by six opto-isolators (to protect the controlling computers from switching noise) whose LEDs are connected in bidirectional pairs in a delta configuration, and lit by three logic signals connected to the vertices of the delta.

The motor sequencing signals come directly from onboard microprocessors, one for each motor. These are CMOS (Motorola MC146805 with Hitachi HM6116 RAMs) to keep power consumption reasonable. Each processor compares the output of its shaft encoder to a desired motion (supplied by yet another processor as a time parameterized function) and energizes or de-energizes the appropriate motor phase to keep the motor on track with no wasted power. Since each motor has a potential power dissipation of 250 watts this is more than a nicety.
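The common-point rule above is just the instantaneous-center-of-rotation constraint. Here is a minimal sketch of the orchestration it implies, computing each assembly's steering angle and rolling speed from a commanded body velocity and spin rate; the wheel placement and the idealized kinematics are assumptions made for illustration, not the rover's actual control code.

```python
# A minimal sketch of the steering geometry described above: for a
# commanded body velocity (vx, vy) and spin rate w, each wheel's
# contact-point velocity is the body velocity plus w x r, and the
# wheel is steered along that vector -- which automatically makes
# lines through the axles meet at the common instantaneous center
# of rotation. Wheel positions are assumed for illustration.

import math

# Three wheel assemblies, 120 degrees apart, 0.25 m from the center.
WHEELS = [(0.25 * math.cos(a), 0.25 * math.sin(a))
          for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]

def wheel_commands(vx, vy, w):
    """Return (steering angle, rolling speed) for each assembly."""
    commands = []
    for (x, y) in WHEELS:
        wvx = vx - w * y          # velocity of the contact point:
        wvy = vy + w * x          # v_body + w x r
        commands.append((math.atan2(wvy, wvx), math.hypot(wvx, wvy)))
    return commands

# Pure spin: every wheel is tangent to a circle about the center,
# so the axle lines all pass through the center point.
for angle, speed in wheel_commands(0.0, 0.0, 1.0):
    print(f"steer {math.degrees(angle):7.1f} deg, roll {speed:.2f} m/s")
```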
The shaft encoder outputs and the torques from all the motors, as estimated by the motor processors, are monitored by another processor, the Simulator, a Motorola MC68000 (with all CMOS support circuitry the power requirement for our 32K 68000 is under one watt; the new high performance 74HC series CMOS allows operation at full 10 MHz speed), which maintains a dead-reckoned model of the robot's position from instant to instant. The results of this simulation (which represents the robot's best position estimate) are compared with the desired position, produced by another processor (68000), the Controller, in yet another processor (68000), the Conductor, which orchestrates the individual motor processors. The Conductor adjusts the rates and positions of the individual motors in an attempt to bring the Simulator in line with requests from the Controller, in what amounts to a highly non-linear feedback loop; a sketch of one cycle of this loop appears below.

Other onboard processors are:

Communication: A 68000 which maintains an error corrected and checked packet infrared link with a large controlling computer (a VAX 11/780 helped out by an FPS-100 array processor and a custom high speed digitizer) which will do the heavy thinking. Programs run in the Controller are obtained over this link.

Sonar: A 6805 which controls a number of Polaroid sonar ranging devices around the body of the rover. These will be used to maintain a rough navigation and bump avoidance model. All measurements and control functions of this processor and the following ones are available (on request over a serial link) to the Controller.

Camera: A 6805 which controls the pan, tilt and slide motors of the onboard TV camera. This compact camera (Edo Western 1631) broadcasts its image on a small UHF or microwave transmitter (made by 3DRM Inc.?). The signal is received remotely and the video signal captured by a high bandwidth digitizer system and then read by the remote VAX. There are tentative plans for a minimal onboard vision system using a 68000 with about 256K of extra memory onboard the rover, for small vision tasks when the rover is out of communication with the base system.

Proximity: A 6805 which monitors several short range modulated infrared proximity detectors which serve as a last line of defense against collision, and which sense any drop off in the floor, and contact switches.

Utility: A 6805 which senses conditions such as battery voltage and motor temperature, and which controls the power to non-essential but power hungry systems like the TV camera and transmitter.

Communication between processors is serial, via Harris CMOS UARTs, at a maximum speed of 256 kilobaud. The Conductor talks with the motor processors on a shared serial line and the Controller communicates with the Sonar, Camera, Proximity, Utility and any other peripheral processors by a similar method. The processors live in a rack on the second storey of the robot structure (Figure 1), between the motor and battery assembly (first floor) and the camera plane (penthouse). Figure 4 shows the rack alone, not fully populated.

The rover is powered by six sealed lead-acid batteries (Globe gel-cell 12230) with a total capacity of 60 amp hours at 24 volts. The motors are powered directly from these; the rest of the circuitry derives its power indirectly from them through switching DC/DC converters (Kepco RMD-24-A-24 and Semiconductor Circuits 1717262). Each 6805 processor draws about one eighth of a watt, each 68000 board only one watt.
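As a rough illustration of the Simulator/Controller/Conductor chain just described, here is a minimal sketch of one control cycle; the interfaces and the simple proportional correction are my own assumptions, since the paper does not give the actual algorithm.

```python
# A minimal sketch of the Simulator/Controller/Conductor loop
# described above. The interfaces and the proportional correction
# are assumptions for illustration, not the rover's actual code.

import math

def simulator_update(pose, odometry, dt):
    """Dead reckoning: integrate encoder-derived speeds into the
    best estimate of (x, y, heading)."""
    x, y, th = pose
    v, w = odometry                     # forward speed, turn rate
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def conductor_step(estimated, desired, gain=0.5):
    """Adjust motor setpoints to pull the Simulator's estimate
    toward the Controller's request -- the feedback loop."""
    return [gain * (d - e) for d, e in zip(desired, estimated)]

pose = (0.0, 0.0, 0.0)
pose = simulator_update(pose, odometry=(1.0, 0.1), dt=0.02)
corrections = conductor_step(pose, desired=(0.05, 0.0, 0.0))
print(pose, corrections)
```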
Physically the robot is a meter tall and 55 centimeters in diameter. It weighs 90 kilograms. The maximum acceleration is one quarter g, and the top speed is 10 kilometers/hour. With appropriate onboard programming the motions should be very smooth. The great steering flexibility will permit simulation of other steering systems such as those of cars, tanks and boats and other robots by changes in programming.

[Figure: onboard processor diagram -- Communication (68000), Conductor (68000), motor 6805s, Proximity]

Progress

As of this writing (June 8, 1982) we are awaiting final fabrication of the remainder of the robot's mechanical parts, and have bench prototypes of all the major electronics. Debugging is proceeding furiously. Three instances of each of the two processors are completely operational. Low level control software has been debugged on a rover simulation, but remains to be tested on hardware. We feel confident that a minimal demonstration of a running rover can be arranged for the mid August date of this conference.

Promises

The hope expressed in the introduction, that mobility somehow forces evolution in the direction of true intelligence, may already be being vindicated. Our efforts to devise a suitable programming language useful for writing Controller scripts for interesting robot tasks floundered until we realized clearly that controlling a rover is an essentially different task than controlling a robot arm. An arm, in the typical assembly line setting, executes a sequence of tasks, possibly with branching, that can be visualized and charted in advance. Exceptional conditions are an infrequent nuisance that can be handled by an error exit or a small number of interrupt activated routines that reset the situation to some standard state and return control to the mainstream code. This approach does not work for programming a rover in typical environments because almost every condition encountered by a rover is exceptional. Obstacles and opportunities cannot in general be anticipated in advance, and if we attempted to deal with them by interrupt coding, our stacks would soon overflow as exception interrupted exception.

We are now working on an approach in which the Controller is governed by a set of production rules, each rule having an activating condition involving the state of sensors and internal variables and consequences which affect internal variables and motor functions. Conceptually the potentially relevant rules are active at all times, and may take over control from each other as preconditions change. Conflicts are mediated by weights (or confidence levels) produced by activated rules. Sequences of steps are made possible in this arrangement by the introduction of sequence variables, which trigger rules and are then changed by those rules to trigger successive ones. To maintain some coherence of behavior there can also be variables which affect the preconditions of large classes of rules -- for instance a variable that at a low value causes the robot to ignore minor signals from its proximity sensors while responding to them vigorously if high. A sequence of proximity signals would gradually raise the value of this variable, and the robot's behavior would change from boldly pursuing its main goal (whatever that may be) to cautiously feeling its way around. We are tempted to give such variables names like "fear" and "enthusiasm", but will refrain from doing so for the moment.
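A minimal sketch of this rule scheme follows. The specific rules, the weighting arithmetic, and the variable name "caution" (the paper pointedly declines to call it "fear") are illustrative assumptions, not the rover's actual Controller code.

```python
# A minimal sketch of the weighted production-rule Controller
# described above, with one behavior-shaping variable. The rules
# and weighting scheme are illustrative assumptions.

state = {"proximity_signal": False, "caution": 0.1}

def rule_pursue_goal(s):
    # Bold progress; loses weight as caution rises.
    return (1.0 - s["caution"], "drive toward goal")

def rule_avoid(s):
    # Fires on proximity signals, weighted by accumulated caution.
    if not s["proximity_signal"]:
        return (0.0, "no-op")
    return (s["caution"], "feel way around obstacle")

RULES = [rule_pursue_goal, rule_avoid]

def controller_step(s):
    """All rules are conceptually active; the highest-weight
    consequence takes over control for this cycle."""
    if s["proximity_signal"]:            # signals raise caution
        s["caution"] = min(1.0, s["caution"] + 0.3)
    weight, action = max((r(s) for r in RULES), key=lambda wa: wa[0])
    return action

print(controller_step(state))            # bold at first
state["proximity_signal"] = True
for _ in range(3):
    action = controller_step(state)
print(action)                            # now cautiously feeling around
```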
Our system must operate in real time, a need which distinguishes it from typical rule based expert systems. We currently imagine a scheme in which, for large jobs, the entire rule system is maintained in the remote VAX, with efficiently indexed rule subsets likely to be relevant being dynamically downloaded as conditions change. Even so it is clear that the load on the Controller will be heavy. We are preparing a 256 Kbyte memory expansion board for it, and are looking forward to a challenging job.

Other interesting preliminary thinking has resulted in a scheme by which a very simple arm with only three actuators will enable the robot, making heavy use of its great steering flexibility, to enter and leave through a closed standard office door. Discussions with the distributed sensor net crew here have raised the possibility of equipping the robot with ears, so it could passively localize sound, and thus perhaps respond correctly, both semantically and geometrically, to a spoken command like "Come here!". We are also toying with the idea of a phased array sonar with about 100 transducers operating at 50 KHz which, in conjunction with the high speed analog conversion device mentioned above and the array processor, would be able to produce a modest resolution depth map (and additional information) of a full hemisphere in about one second, by sending out a single spherical pulse, then digitally combining the returned echoes from the individual transducers with different delay patterns to synthesize narrow receiving beams.

Support

This work is being lavishly funded by the Office of Naval Research.

References

1. Carlisle, B. and B. Shimano. The Unimation mobile robot. Personal communication.

2. Moravec, H. P. Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover. Ph.D. thesis, Stanford University, September 1980.

3. Raphael, B. The Thinking Computer. W. H. Freeman and Company, San Francisco, California, 1976.

4. Yakimovsky, Y. and R. Cunningham. "A System for Extracting Three-Dimensional Measurements from a Stereo Pair of TV Cameras." Computer Graphics and Image Processing 7 (1978), 195-210.
SOLVING THE FIND-PATH PROBLEM BY GOOD REPRESENTATION OF FREE SPACE

Rodney A. Brooks
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, Ma. 02139

ABSTRACT

Free space is represented as a union of (possibly overlapping) generalized cones. An algorithm is presented which efficiently finds good collision free paths for convex polygonal bodies through space littered with obstacle polygons. The paths are good in the sense that the distance of closest approach to an obstacle over the path is usually far from minimal over the class of topologically equivalent collision free paths. The algorithm is based on characterizing the volume swept by a body as it is translated and rotated as a generalized cone and determining under what conditions one generalized cone is a subset of another.

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-81-K-0494 and in part by the Advanced Research Projects Agency under Office of Naval Research contract N00014-80-C-0505.

1. INTRODUCTION

The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. This paper presents a new representation for free space, as natural "freeways" between obstacles, and a new algorithm to solve find-path using that representation. The algorithm's advantages over those previously presented are that it is quite fast and it finds good paths which generously avoid obstacles rather than barely avoiding them. It does not find possible paths in extremely cluttered situations but it can be used to provide direction to more computationally expensive algorithms in those cases.

A common approach to the find-path problem, used with varying degrees of sophistication, is the configuration space approach. Lozano-Perez [5] gives a thorough mathematical treatment of configuration space. The idea is to determine those parts of free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with the approach, requiring complex geometric algorithms which are computationally expensive.

Conceptually one can view configuration space as shrinking the object to a point while at the same time expanding the obstacles inversely to the shape of the moving object. This approach works well when the moving object is not allowed to rotate. If it can rotate then the grown obstacles must be embedded in a higher dimensional space -- one extra dimension for each degree of rotational freedom. Furthermore the grown obstacles have non-planar surfaces, even when the original problem was completely polygonal or polyhedral. Typically implementors have approximated the grown obstacles in order to be able to deal with rotations.

Moravec [6] had to solve the find-path problem in two dimensions. He bounded all obstacles and the moving object by circles. Then the grown obstacles were all perpendicular cylinders and the problem could be projected back into two dimensions where rotations could be ignored. This method missed all paths that required rotational manoeuvering. Lozano-Perez [5] split the rotation range into a fixed number of slices, and within each slice bounded the grown obstacles by polyhedra. Earlier Udupa [8] had used much cruder bounding polyhedra in a similar way. Recently Brooks and Lozano-Perez [3] developed a method for computing more directly with the curved surfaces of the grown obstacles. The algorithm can be quite expensive although given enough time and space it can find a path if one exists.

The algorithm presented in this paper is based on a different idea: that of using representations of space which capture the essential effects of translating and rotating a body through space. Free space is represented as overlapping generalized cones since they are descriptions of swept volumes. The cones provide a high level plan for moving the object through space. The volume swept by the particular object as it is translated and rotated is characterized as a function of its orientation. Find-path then reduces to comparing the swept volume of the object with the sweepable volumes of free space. This is done by inverting the characterization of the volume swept by an object, to determine its valid orientations as it moves through a generalized cone. Throughout this paper we restrict our attention to the two-dimensional problem where the object to be moved is a convex polygon and the obstacles are represented as unions of convex polygons.

II. DESCRIBING FREE SPACE AS GENERALIZED CONES

Generalized cones are a commonly used representation of volume for modelling objects in artificial intelligence based computer vision systems. They were first introduced by Binford [2]. A generalized cone is formed by sweeping a two dimensional cross section along a curve in space, called a spine, and deforming it according to a sweeping rule. In this paper we will consider a two dimensional specialization of generalized cones (although we will still refer to them as volumes). The spines will be straight and the cross sections will be line segments held perpendicular to the spine with left and right radii. The sweeping rule will independently control the magnitudes of the left and right radii as piecewise linear functions. A concrete rendering of this specialization is sketched below.
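Here is a minimal sketch of such a two-dimensional cone: a straight spine with piecewise linear left and right radius profiles, plus a point-containment test. The encoding (radius samples with linear interpolation) is my own illustrative choice, not the paper's implementation.

```python
# A minimal sketch of the paper's 2D specialization of a generalized
# cone: a straight spine with left and right radii varying piecewise
# linearly along it. The encoding is an illustrative assumption.

from bisect import bisect_right

class Cone2D:
    def __init__(self, length, left_profile, right_profile):
        # Each profile: list of (t, radius) breakpoints, t in [0, length].
        self.length = length
        self.left = left_profile
        self.right = right_profile

    @staticmethod
    def _radius(profile, t):
        """Piecewise-linear interpolation of a radius profile."""
        ts = [p[0] for p in profile]
        i = max(1, min(bisect_right(ts, t), len(ts) - 1))
        (t0, r0), (t1, r1) = profile[i - 1], profile[i]
        return r0 + (r1 - r0) * (t - t0) / (t1 - t0)

    def contains(self, t, offset):
        """Is the point at spine position t, with signed perpendicular
        offset (positive = left), inside the cone?"""
        if not 0.0 <= t <= self.length:
            return False
        if offset >= 0:
            return offset <= self._radius(self.left, t)
        return -offset <= self._radius(self.right, t)

# A cone 10 long that narrows linearly from radius 2 to radius 1.
cone = Cone2D(10.0, [(0.0, 2.0), (10.0, 1.0)], [(0.0, 2.0), (10.0, 1.0)])
print(cone.contains(5.0, 1.4), cone.contains(5.0, 1.6))  # True False
```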
Then the grown obstacles were all perpendicular cylinders and the problem could be projected back into two dimensions where rotations could be ignored. This method missed all paths that required rotational manoeuvering. Lozano-Pérez [5] split the rotation range into a fixed number of slices, and within each slice bounded the grown obstacles by polyhedra. Earlier Udupa [8] had used much cruder bounding polyhedra in a similar way. Recently Brooks and Lozano-Pérez [3] developed a method for computing more directly with the curved surfaces of the grown obstacles. The algorithm can be quite expensive, although given enough time and space it can find a path if one exists.

The algorithm presented in this paper is based on a different idea: that of using representations of space which capture the essential effects of translating and rotating a body through space. Free space is represented as overlapping generalized cones since they are descriptions of swept volumes. The cones provide a high level plan for moving the object through space. The volume swept by the particular object as it is translated and rotated is characterized as a function of its orientation. Find-path then reduces to comparing the swept volume of the object with the sweepable volumes of free space. This is done by inverting the characterization of the volume swept by an object, to determine its valid orientations as it moves through a generalized cone. Throughout this paper we restrict our attention to the two-dimensional problem where the object to be moved is a convex polygon and the obstacles are represented as unions of convex polygons.

II. DESCRIBING FREE SPACE AS GENERALIZED CONES

Generalized cones are a commonly used representation of volume for modelling objects in artificial intelligence based computer vision systems. They were first introduced by Binford [2]. A generalized cone is formed by sweeping a two dimensional cross section along a curve in space, called a spine, and deforming it according to a sweeping rule. In this paper we will consider a two dimensional specialization of generalized cones (although we will still refer to them as volumes). The spines will be straight and the cross sections will be line segments held perpendicular to the spine with left and right radii. The sweeping rule will independently control the magnitudes of the left and right radii as piecewise linear functions.

Free space is represented as overlapping generalized cones. Natural "freeways", elongated regions through which the object might be moved, are represented as generalized cones. The generalized cones overlap at intersections of these natural freeways. Figure 1 illustrates a few of the generalized cones describing the free space around two obstacles in a confined workspace. The representation of free space is constructed by examining all pairs of edges of obstacle polygons. If the edges define a natural "freeway" through space they are used to construct a generalized cone.

Figure 1. A few of the generalized cones generated by two obstacles and the workspace boundary.

A. Candidate generalized cones.

Each edge has a "free" side and a "full" side, where the "free" side is the outside of the particular polygon with which it is associated. Each edge has an outward pointing normal, pointing into the "free" side. Two edges are accepted as defining a candidate generalized cone if they meet the following two requirements. (1.) At least one vertex of each edge should be on the "free" side of the other.
(2.) The dot product of the outward pointing normals should be negative. The second condition ensures that the "free" sides of the two edges essentially face each other. Thus in this initial stage there is a complexity factor of $O(n^2)$, where n is the number of edges of obstacle polygons.

Given a candidate pair of edges, a spine is constructed for the generalized cone. It is the bisector of the space which is on the "free" side of both edges. (Thus if the edges are parallel the spine is parallel to them both, and equidistant; if the edges are not parallel then the spine bisects the angle they form.) The generalized cone occupies the volume between the two defining edges. At each vertex of the two edges (if the vertex is on the "free" side of the other edge) the cone is extended parallel to the spine.

The generalized cone so defined may not lie entirely in free space. There may be other obstacles which intersect it. Each obstacle is compared to the generalized cone. A polygon can be intersected with the cone in time O(n) where n is the number of edges of the polygon. If the intersection is empty then nothing further need be done. If not, then the intersection is projected normally onto the spine of the generalized cone. This is illustrated in figure 2. Again this is an O(n) operation in the number of vertices (and hence edges). The result of comparing all obstacles to the generalized cone is a set of regions of the spine where there is no obstacle which intersects the cone in a slice normal to the spine. Each disjoint slice which includes parts of the original two edges is then accepted as a generalized cone describing part of free space. Clearly the complete operation of describing free space is at most $O(n^3)$ in the number of edges in the obstacle polygons.

Figure 2. Slices of the generalized cone removed due to obstacles.

B. The representation used for generalized cones.

Figure 3 shows the complete representation used for cones describing parts of free space. There is a straight spine, parameterized over the range $t \in [0, l]$ where l is the length of the cone. If the sides of the cone are not parallel to the spine then t = 0 corresponds to the wider end. On both the left and right the maximal radii achieved over the length of the cone occur at t = 0 and are denoted $b_l$ and $b_r$ respectively, describing the "big" end of the cone. The minimal radii achieved occur at t = l and are denoted $s_l$ and $s_r$, describing the "small" end of the cone. If $b_l = s_l$ and $b_r = s_r$ then the two sides of the cone are parallel to the spine. If not, then there is a symmetric thinning of the cone (which may start, and end, at different values of t on the left and right) where the left and right radii of the thinning parts of the cone are both given by the expression $mt + c$, where m and c are constants. Note that it is always the case that $m \le 0$ and $c > 0$.

In summary, the seven constants $l, b_l, b_r, s_l, s_r, m$ and $c$ completely specify the shape and size of the generalized cone. In addition its location and orientation must be determined concurrently with computing these parameters.

Figure 3. A generalized cone defined by $l, b_l, b_r, s_l, s_r, m$ and $c$.
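The candidate-pair filter of section II.A is compact enough to state directly. Below is a minimal sketch, assuming each edge is stored with its two endpoints and its outward unit normal as numpy arrays; the function names are ours, not the paper's.

    import numpy as np

    def free_side(edge, point):
        # Signed distance of `point` from `edge` along its outward
        # normal; positive means the point is on the edge's "free" side.
        p, q, normal = edge
        return float(np.dot(normal, point - p))

    def is_candidate_pair(e1, e2):
        # Condition (2): the outward normals essentially face each other.
        if np.dot(e1[2], e2[2]) >= 0:
            return False
        # Condition (1): at least one vertex of each edge lies on the
        # "free" side of the other.
        return (max(free_side(e2, e1[0]), free_side(e2, e1[1])) > 0 and
                max(free_side(e1, e2[0]), free_side(e1, e2[1])) > 0)

Running this test over all pairs of obstacle edges is the $O(n^2)$ initial stage described above; each surviving pair then gets a bisector spine and is pruned against the remaining obstacles.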
III. DETERMINING LEGAL ORIENTATIONS

Let the moving object be called A. It is a convex polygon and has vertices $a_1, a_2, \ldots, a_n$. Choose an origin and x and y axes. Let $d_i$ be the distance of vertex $a_i$ from the origin. For optimal performance of the find-path algorithm the origin should be chosen so that $\max_{1 \le i \le n} d_i$ is minimized over possible origins. The direction of the x-axis can be chosen arbitrarily. Orientations relative to A are defined in terms of an angle relative to A's x-axis. Thus a point's angle relative to A is the angle made by a ray from the origin to the point, relative to the x-axis. A direction relative to A is the angle made by a vector in that direction with A's x-axis. Let $v_1, v_2, \ldots, v_n$ be the angles of the vertices $a_1, a_2, \ldots, a_n$.

A. The radius function of a polygon.

Consider first the problem of determining the volume swept by a polygon as it is moved in a straight line with fixed orientation. Let the object be moving in direction θ. The swept volume depends on the object's cross section perpendicular to that angle. The cross section can be broken into two components: that on the left of the direction of movement, and that on the right. Figure 4 illustrates.

Figure 4. Volume swept by an object during pure translation.

Define a radius function $R(\xi)$ to be the infimum of the distance from the origin that a line which is normal to a ray at angle ξ can be without intersecting the interior of A. Figure 5 illustrates. (The radius function is closely related to the support of a convex polygon - e.g. see [1].) The magnitudes of the left and right cross sections of figure 4 can then be denoted $R(\theta + \pi/2)$ and $R(\theta - \pi/2)$ respectively. Figure 6 shows the geometric construction of $R(\xi)$ for a given object A and angle ξ. The darkened outline in figure 7 shows the radius function R in polar coordinates for the same object. Thus R can be defined by

$R(\xi) = \max_{1 \le i \le n} d_i \cos(\xi - v_i)$.

Figure 5. Definition of $R(\xi)$.
Figure 6. Geometric construction of $R(\xi)$; here $R(\xi) = d_4 \cos(\xi - v_4)$.
Figure 7. $R(\xi)$ in polar coordinates.

The major interest in functions of this form will be in their inverse images of intervals of the form $(-\infty, r]$. Let

$R{\downarrow}(r) = \{\xi \mid R(\xi) \le r\}$.

Thus $R{\downarrow}(r)$ is the range of angles at which the cross section of object A is no more than r. The inverse image can be easily computed by using the two values possible for $\arccos(r/d_i)$ for each i to form an interval containing $v_i$ and subtracting it from the interval $[0, 2\pi]$. We restrict our attention to moving objects which are convex polygons precisely because their radius functions can be so easily inverted.

If we write $R_a(\xi) = R(\xi + a)$, then the legal orientations of A relative to the direction of its movement down the center of a strip of diameter d can be written as $L = R_{\pi/2}{\downarrow}(d/2) \cap R_{-\pi/2}{\downarrow}(d/2)$. Note that while L may consist of more than one interval, the set of orientations taken by A, in some trajectory along the length of the cone, must form a connected set - i.e. it must be a single interval, and so must be contained in a single interval of L.

Finally note that the sum of two radius functions has the same form as a radius function, and so the "↓" operation can be computed just as easily on the sums of such functions as on single functions. I.e. with R as above and $S(\xi) = \max_{1 \le j \le m} e_j \cos(\xi - \mu_j)$,

$(R + S)(\xi) = \max_{1 \le i \le n,\ 1 \le j \le m} \left( d_i \cos(\xi - v_i) + e_j \cos(\xi - \mu_j) \right)$,

and each term inside the "max" can be written in the form $f_{ij} \cos(\xi - \phi_{ij})$.
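The radius function and its inverse image are simple enough to state directly in code. The sketch below follows the two-arccos construction just described; it is a minimal illustration with our own names, and it returns the forbidden arcs whose complement in [0, 2π) is $R{\downarrow}(r)$.

    import numpy as np

    def radius(d, v, xi):
        # R(xi) for a convex polygon whose vertices sit at polar
        # coordinates (d[i], v[i]) about the chosen origin.
        return max(di * np.cos(xi - vi) for di, vi in zip(d, v))

    def forbidden_arcs(d, v, r):
        # The two arccos values for each vertex bracket v[i]; inside that
        # arc the cross section exceeds r.  The complement of these arcs
        # in [0, 2*pi) is the inverse image R-down(r).
        arcs = []
        for di, vi in zip(d, v):
            if di > r:
                half = np.arccos(r / di)
                arcs.append(((vi - half) % (2 * np.pi),
                             (vi + half) % (2 * np.pi)))
        return arcs

Vertices with $d_i \le r$ contribute no arc at all, which is why only the outermost vertices of A matter when the strip is wide.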
B. Bounding polygons with an appropriate rectangle.

In general we wish to find the legal orientations for object A when its origin is at a point with spine parameter t on the spine of a generalized cone C. A conservative set of orientations can be found by enclosing A in a rectangle which has two of its sides parallel to the spine of C. When A is oriented so that the spine has angle θ relative to it, the bounding rectangle has the following properties. Its extent forward (in the direction of increasing spine parameter) from the origin is $R(\theta)$. Its extents left and right are $R(\theta + \pi/2)$ and $R(\theta - \pi/2)$ respectively. Its extent to the rear is (rather conservatively) $d = \max_{1 \le i \le n} d_i$. Figure 8 illustrates.

Figure 8. An object A is bounded by a rectangle aligned with the spine of cone C.

The problem now is to invert this bounding operation - i.e. to find for what range of θ the bounding rectangle is within the generalized cone. That is, for each $t \in [0, l]$ we wish to find a range of valid θ, denoted V(t), for orientations of the object A. The rear of the bounding rectangle simply implies $V(t) = \emptyset$ outside of the interval $[d, l]$. The forward bound implies $V(t) \subseteq R{\downarrow}(l - t)$. The parallel sides of the "big" part of the generalized cone imply upper bounds on the width of the bounding rectangle. Thus $V(t) \subseteq (R{\downarrow}(b_l) - \pi/2) \cap (R{\downarrow}(b_r) + \pi/2)$. Note that the terms on the right involve adding a constant to a subset of $[0, 2\pi]$ in the obvious way. Let V' be defined as

$V'(t) = R{\downarrow}(l - t) \cap (R{\downarrow}(b_l) - \pi/2) \cap (R{\downarrow}(b_r) + \pi/2)$

over the interval [d, l]. Subject to the above three constraints the rectangle can everywhere be as big as one which will fit in the "small" part of the generalized cone. Thus

$((R{\downarrow}(s_l) - \pi/2) \cap (R{\downarrow}(s_r) + \pi/2)) \cap V'(t) \subseteq V(t)$.

Furthermore, subject to V'(t), any orientation of A is valid which results in a bounding rectangle that is within the bounds given by the decreasing boundaries of the generalized cones. Recall that at each point of the spine, with parameter t, these boundaries have both left and right radius of $mt + c$ and that $m \le 0$. Thus sufficient conditions are that

$R(\theta + \pi/2) \le m(t + R(\theta)) + c$
$R(\theta - \pi/2) \le m(t + R(\theta)) + c$

are both true. These can be expressed as

$[R_{\pi/2} - mR](\theta) \le mt + c$
$[R_{-\pi/2} - mR](\theta) \le mt + c$

whence

$([R_{\pi/2} - mR]{\downarrow}(mt + c) \cap [R_{-\pi/2} - mR]{\downarrow}(mt + c)) \cap V'(t) \subseteq V(t)$.

In summary then, over the range $t \in [d, l]$, a valid subset of orientations for A can be expressed as

$V(t) = R{\downarrow}(l - t) \cap (R{\downarrow}(b_l) - \pi/2) \cap (R{\downarrow}(b_r) + \pi/2) \cap \left( \left[ (R{\downarrow}(s_l) - \pi/2) \cap (R{\downarrow}(s_r) + \pi/2) \right] \cup \left[ [R_{\pi/2} - mR]{\downarrow}(mt + c) \cap [R_{-\pi/2} - mR]{\downarrow}(mt + c) \right] \right)$

and elsewhere $V(t) = \emptyset$. The key property of V(t) is the following.

LEMMA: For $t_1, t_2 \in [d, l]$, $t_1 \le t_2$ implies $V(t_2) \subseteq V(t_1)$.

Note that V(t) does not necessarily include all possible orientations for object A to be contained in cone C at the point with parameter t along the spine. However it is the set of possibilities which the find-path algorithm described here is willing to consider. Its key property is that, as stated by the lemma, the set of valid orientations (amongst those to be considered) does not increase as the object is moved from the "big" end to the "small" end of a generalized cone.

IV. SEARCHING FOR A PATH

The algorithm to find a collision free path proceeds as follows. First the generalized cones which describe free space are computed (this can be done in time $O(n^3)$). Next the cones are pairwise examined to determine if and where their spines intersect. Up to this stage the algorithm is independent of the object A to be moved through the environment.
Each spine intersection point for each cone is now annotated with the subset of $[0, 2\pi]$ which describes the orientations for the object A in which it is guaranteed to be completely contained in the generalized cone. The final stage of the algorithm is to find a path from the initial position to the goal position following the spines of the generalized cones and changing from cone to cone at spine intersection points. If the object is kept at valid orientations at each point on each spine then the path will be collision free.

A. A searchable graph.

A graph is built, then searched with the A* algorithm (see Nilsson [7]). The cost function used is the distance traveled through space. The nodes of the graph consist of points on generalized cone spines which correspond to points of intersection, along with a single orientation subinterval of $[0, 2\pi]$ (except that intervals may wrap around from 2π to 0). Thus for an intersection point with parameter t on the spine of cone C, a node for each interval in V(t) is built.

The arcs in the graph arise in two ways: those that correspond to transfer from one intersection point on a spine to another, and those that correspond to transfer from one spine to another at their common intersection point. All such candidate pairs of nodes are checked for connectivity. The lemma of section 3 guarantees that it suffices to check, for intra-cone arcs, whether the orientation intervals of the two nodes have a non-empty intersection. If so, then it is certainly possible to move the object A from one point to the other using any orientation within that intersection. For inter-cone pairs of nodes it also suffices to check for non-empty intersection of their orientation intervals, as their points are coincident in space.

B. Intermixing rotations and translations.

The graph can now be searched to find a path, consisting of an ordered set of nodes, from initial position to goal position. If the graph search is successful it indicates a whole class of collision free paths. It remains to choose one particular trajectory. The class of paths found share a common trajectory for the origin of the moving object. The orientation interval associated with each node of the path through the graph constrains the valid orientations at that point. It is also necessary to ensure that no invalid orientation is assumed during translation and rotation along the spine of a generalized cone.

It may well be the case that rotations are expensive, and it is worth searching the class of paths resulting from the graph search to find the optimal path in terms of the least rotation necessary. This could be carried out using the A* algorithm. Every node in the path found in the first search would be turned into a collection of nodes. Each pair of adjacent nodes in the path found above would have their orientation ranges intersected. Combine each end value of each intersection, and the initial and final orientations, into a set of possible orientations. Each orientation which is valid for a particular path node, along with that node, generates a node in the new graph. Adjacency would be inherited from adjacency of generating nodes in the original path. The A* algorithm is used to search this new graph.
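The interval bookkeeping and the search itself are both standard. The following is a minimal sketch, with hypothetical node and distance representations, and ignoring the wrap-around of angle intervals which the paper allows.

    import heapq
    from itertools import count

    def intervals_overlap(a, b):
        # Non-empty intersection of two orientation intervals (a, b are
        # (low, high) pairs); wrap-around from 2*pi to 0 is not handled.
        return max(a[0], b[0]) < min(a[1], b[1])

    def astar(start, goal, neighbors, dist, h):
        # Standard A* over (intersection point, orientation interval)
        # nodes; the cost is distance traveled through space.
        tie = count()
        frontier = [(h(start), next(tie), 0.0, start, [start])]
        done = set()
        while frontier:
            _, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in done:
                continue
            done.add(node)
            for nxt in neighbors(node):
                g2 = g + dist(node, nxt)
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None

Here `neighbors` would yield intra-cone nodes whose intervals overlap and inter-cone nodes at the same intersection point, exactly the two arc types described above.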
Figures 9 and 10 illustrate two paths found by a much simpler method. Rotations are restricted to points corresponding to nodes of the path - i.e. the object is held with fixed orientation during each translation. The orientation at each point is the midpoint of the orientations given by intersecting the allowed orientation intervals of the corresponding path node and the subsequent node. Thus traversals of intra-cone arcs of the graph built above are rotation free, and traversals of inter-cone arcs are pure rotations. The lemma of section 3 guarantees that this strategy leads to valid collision free paths.

Figure 9. A path found by the algorithm.
Figure 10. A path found by the algorithm.

V. USEFULNESS AND APPLICABILITY

The major drawback of the find-path algorithm presented in this paper is that paths are restricted to follow the spines of the generalized cones chosen to represent free space. It typically does not work well in tightly constrained spaces, as there are insufficient generalized cones to provide a rich choice of paths. Furthermore, in a tight space the interior points of spines are always near to the spine ends. Therefore the need to extend the bounding rectangles rearward to the maximum possible distance for the moving object (so that the lemma of section 3 will hold) means that typically there are very few points of intersection of generalized cones where the object's bounding rectangle in each orientation is contained within the appropriate generalized cone. This situation could be improved significantly by developing a better algorithm for decomposing free space into generalized cones. The current pruning technique is often too drastic - if an obstacle intersects a generalized cone it may be better to simply reduce the cone radius, rather than the current practice of slicing out all of the spine onto which the obstacle projects.

A. Advantages.

In relatively uncluttered environments the algorithm is extremely fast. Furthermore it has the following properties, which are absent from some or all of the algorithms mentioned in section 1.

1. The paths found, and the amount of computation needed, are completely independent of the chosen world coordinate system.
2. Obstacles affect the representation of free space only in their own locality. Thus additional obstacles spatially separated from a free path found when they are not present can not affect the ability to find that free path when they are present. (Resource limited, and approximating, algorithms which cut free space into rectangloid cells often suffer from this problem.)
3. Free space is represented explicitly rather than as the complement of forbidden space.
4. The paths tend to be equally far from all objects, out in truly free space, rather than scraping as close as possible to obstacles, making the instantiation of the paths by mechanical devices subject to failure due to mechanical imperfections.

We plan to use this algorithm in conjunction with an implementation of an algorithm ([3]) which finds paths with rotations based on a configuration space approach. The generalized cone method will be used to find the easy parts of the path (the other method is computationally expensive both for easy and hard paths), leaving the more expensive method for the hard parts. The algorithm presented here can also provide some direction for the hard parts, as it can quickly compute all topologically possible paths.

B. Three Dimensions.
Clearly the eventual goal of this work is to develop algorithms which solve the find-path problem in three dimensions with three rotational degrees of freedom for each link of general articulated devices. In this section we discuss approaches to extending the presented algorithm to the case of a single convex polyhedron moving through a space of polyhedral obstacles.

Three dimensional generalized cones can be constructed by considering all triples of faces of obstacle polyhedra. The construction algorithm (including intersection with obstacle polyhedra) has complexity $O(n^4)$ in the number of obstacle faces. The generalized cones so constructed will have triangular cross-sections. It is not clear how to pairwise intersect the generalized cones, as in general their spines will not intersect. The "path-class" idea presented below may solve this problem.

Rather than a bounding rectangle, a right triangular prism should be used, where the triangle is similar to the cross section of the generalized cone being traversed. If more than one rotational degree of freedom is assumed there may be problems inverting this bounding volume. Rather than V(t) having a set of intervals as a value, it will have a set of three-dimensional polyhedra.

Both for the two dimensional and three dimensional problems it is worth investigating using paths other than the spine through a generalized cone. Then V is parameterized in terms of t and the two end points of a whole class of paths through individual cones. This increases the complexity of the search but will lead to more paths than simply using the spine.

C. Complexity Issues.

In discussions with Tomás Lozano-Pérez it became evident that there exists an $O(n\,c^{\sqrt{\log n}})$ algorithm which can find the same generalized cones as does the $O(n^3)$ algorithm presented here. The spines used for generalized cones correspond to the Voronoi boundaries between obstacles (see Drysdale [4] for a complete solution to the problem), and they can be found in $O(n\,c^{\sqrt{\log n}})$ time. There remains a factor of O(n) to do the intersection of the hypothesized generalized cones with other obstacles. It is unlikely that such an algorithm would perform better than the $O(n^3)$ algorithm presented for practical sized problems.

The complexity of pairwise intersecting the generalized cones was not addressed in the body of the paper. An upper bound of $O(n^4)$ can easily be obtained, as there are at most $O(n^2)$ generalized cones to be considered. However, it seems hard to find a constant c where $cn^2$ generalized cones can be achieved constructively as the number of obstacle polygons is increased. For any c it seems that space eventually tends to become too cluttered for all the cones to be constructed. Thus it seems likely that a better bound than $O(n^4)$ exists. (In fact the Voronoi complexity above suggests a bound of $O(n^2 c^{\sqrt{\log n}})$.)

REFERENCES

[1] Benson, Russell V. Euclidean Geometry and Convexity. McGraw-Hill, 1966.
[2] Binford, Thomas O. Visual Perception by Computer. Invited paper at IEEE Systems Science and Cybernetics Conference, Miami, Dec. 1971.
[3] Brooks, Rodney A. and Tomás Lozano-Pérez. In preparation, 1982.
[4] Drysdale, R. L. (Scot). Generalized Voronoi Diagrams and Geometric Searching. Stanford CS Report STAN-CS-79-705, Jan. 1979.
[5] Lozano-Pérez, Tomás. Automatic Planning of Manipulator Transfer Movements. IEEE Trans. on Systems, Man and Cybernetics, SMC-11, 1981, 681-698.
[6] Moravec, Hans P. Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover. Stanford AIM-340, Sept. 1980.
[7] Nilsson, Nils J. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill, 1971.
[8] Udupa, Shriram M. Collision Detection and Avoidance in Computer Controlled Manipulators. Proceedings of IJCAI-5, MIT, Cambridge, Ma., Aug. 1977, 737-748.
FINAL POSITION CONTROL IN PLANAR HORIZONTAL ARM MOVEMENTS

Jonathan Delatizky
Artificial Intelligence Laboratory
Massachusetts Institute of Technology, Cambridge, MA 02139

(This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-81-K-0494 and in part by the Advanced Research Projects Agency under Office of Naval Research contract N00014-80-C-0505. Also supported in part by USPHS grant AM26710.)

ABSTRACT

Simplified control strategies based on the concept of "Final Position Control" for free reaching movements of the human upper limb have been simulated in a model incorporating six lumped muscles. The resulting movement repertoires bear some resemblance to normal human movements, but do not provide a complete explanation for observed behavior. The significance of these findings is discussed, and some implications for manipulator control are suggested.

I. INTRODUCTION

The planning and control of movements by a linkage - such as a human limb - is a difficult task. This problem is compounded in biological systems by complex nonlinear properties of the actuators (i.e. muscles). It is therefore appealing to speculate that the nervous system has evolved strategies for simplifying the planning and control problem. This hypothesis suggests that the biocontroller is able to exploit specific properties of the musculoskeletal plant in order to utilize simple controls for generating movements.

Studies of human arm reaching movements have shown that the hand usually follows a straight line path in the absence of other instructions. Curved movements appear to be made by segmentation into relatively straight sections [1, 2]. The velocity of the hand in straight movements follows a symmetrical bell-shaped profile [1]. While this scheme may simplify the planning of movements, there is no a priori reason to suppose that it has advantages for the controller; in fact it may make control more difficult.

It has been suggested that one possible simplified control strategy may be to specify activation levels for all muscles such that equilibrium is established at the desired end point [3]. At the start of movement these activation levels would be set, and the time course of the movement would be determined solely by the dynamics of the actuators and linkage. These arguments depend on consistent spring-like properties of muscles [3, 4]. While this scheme is appealing because of its simplicity, it is not clear whether it is sufficiently powerful to explain observed behavior.

We have therefore simulated two degree of freedom arm movements in the horizontal plane using several schemes in which additional constraints are combined with final position control in order to realize realistic trajectories. Sufficient alternatives have been explored that we believe the conclusions to be true for all such schemes. The results suggest that these strategies alone are not suitable candidates for biological motor control, since additional trajectory control is necessary.

II. Methods.

Planar horizontal arm movements in which gravity plays no part have been simulated. A simplified model of the human arm has been developed. The limb segments, viewed as links, are modeled as cylindrical rods. Mass and inertia estimates were obtained from Hollerbach & Flash [5], who used a standard commercially available program (ANSEPA, from DIOMLIB, Postfach 860765, D-8000 Munich 86, West Germany) [6]. The kinematics and dynamics are equivalent to a standard two link planar manipulator, except that joint torques are developed by muscles.

Six spring-like muscles act at the joints with constant moment arms. There is one pair of antagonistic muscles at each joint, plus a pair that cross both joints, as do biceps and triceps in man. Muscles are modeled as variable stiffness springs, with nervous system activation determining the stiffness. A minimum stiffness corresponding to passive properties is specified for each muscle. Linear viscosity is incorporated. Stiffness and viscosity both scale with activation. Since real muscles do not generate instantaneous changes in force, a simple first order lag with a time constant of 25 msec is used to model the typical 70 msec rise time of a muscle twitch. We believe that if a control strategy is inadequate in this model, it will be even less well suited to controlling real limbs.

A simulation program has been written in Lisp and implemented on MIT Lisp Machines. The dynamics are integrated using a fourth order Runge-Kutta algorithm with a step size of 10 msec. Plots of the resulting trajectories and of joint angles and hand velocities are made using Lisp Machine graphics software. Further details of the programs are available from the author.
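The integration loop of Section II is easy to sketch. The fragment below (in Python rather than the paper's Lisp) is a minimal illustration of its structure only: the inertia values, moment-arm matrix, and spring and damping gains are made-up placeholders, and the inertial coupling terms of the true two-link dynamics are omitted.

    import numpy as np

    DT = 0.010    # 10 msec integration step, as in the paper
    TAU = 0.025   # 25 msec first-order activation lag

    I_LINKS = np.array([0.025, 0.045])   # joint inertias (made-up values)
    # Constant moment arms: columns = 6 muscles, rows = shoulder, elbow.
    # One antagonistic pair per joint plus one two-joint pair.
    ARMS = 0.03 * np.array([[1, -1, 0,  0, 1, -1],
                            [0,  0, 1, -1, 1, -1]])

    def muscle_torques(q, qd, act):
        # Variable-stiffness springs with linear viscosity; both scale
        # with activation (the gains 1.0 and 0.1 are placeholders).
        stretch = ARMS.T @ q
        force = act * (1.0 * stretch + 0.1 * (ARMS.T @ qd))
        return -ARMS @ force

    def f(x, u):
        # State: 2 joint angles, 2 joint velocities, 6 activations.
        q, qd, act = x[:2], x[2:4], x[4:]
        qdd = muscle_torques(q, qd, act) / I_LINKS  # coupling omitted
        return np.concatenate([qd, qdd, (u - act) / TAU])

    def rk4_step(x, u, dt=DT):
        k1 = f(x, u)
        k2 = f(x + dt / 2 * k1, u)
        k3 = f(x + dt / 2 * k2, u)
        k4 = f(x + dt * k3, u)
        return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

Iterating rk4_step with a fixed command u is precisely the "single command" experiment: the trajectory is determined solely by the dynamics of the actuators and linkage.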
III. Choice of inputs to the muscles.

A target point in two dimensions is uniquely specified by the Cartesian coordinates of the hand or by the two joint angles. Since there are six muscles in the model, additional constraints are necessary in order to resolve this redundancy. Several methods for selecting such constraints and obtaining complete solutions have been tested. For any such strategy more than one feasible solution may exist. In such cases the one which minimizes the sum of muscle forces at equilibrium is chosen. This criterion implies minimum metabolic energy expenditure in the maintenance of posture. Linear programming is used in this selection, so that one or more muscles will always be minimally activated. The strategies for choosing muscle activation are as follows; a sketch of the selection step appears after the list.

1. Pure final position control (pFPC). Just the two equilibrium equations are specified. The overall level of activation can be modulated within the simplex by specifying a minimum sum of all stiffnesses (2 equations). There are no constraints on the trajectory itself.

2. Directed acceleration final position control (dFPC). In addition to the constraints specified for pFPC, we require the initial acceleration of the hand to be directed towards the target, encouraging straight line movement (3 equations).

3. Directed stiffness final position control (sFPC). In this case the equilibrium position and the Cartesian stiffness tensor at that point are specified (5 equations). The rationale is that a potential energy field with steep walls perpendicular to the desired direction of movement can be created, thereby penalizing deviations from straight line paths [7]. Two-joint muscles should be especially useful in implementing these controls.

4. Synergical final position control (cFPC). The six muscles are split into two antagonistic synergies, each of three muscles, in whatever manner minimizes total force at equilibrium. The synergical linkages are chosen to generate linearly dependent equilibrium equations, fixing the ratio between agonist and antagonist activation. Consequently the overall level of activation can be adjusted without affecting the equilibrium. This strategy differs from the preceding three in that it forces all six muscle activation levels away from their minima.
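The selection step lends itself to an off-the-shelf linear program. Below is a minimal sketch using scipy; the equilibrium matrix and every number in it are illustrative placeholders, not the paper's values.

    import numpy as np
    from scipy.optimize import linprog

    # Choose six activations that hold the arm in equilibrium at the
    # target while minimizing total muscle force (proxied here by the
    # sum of activations).  A_eq @ a = b_eq stands in for the two
    # joint-torque balance equations at the target posture.
    A_eq = np.array([[1.0, -1.0, 0.0,  0.0, 0.5, -0.5],
                     [0.0,  0.0, 1.0, -1.0, 0.5, -0.5]])
    b_eq = np.array([0.2, -0.1])
    a_min = 0.05                      # passive (minimum) stiffness level

    res = linprog(c=np.ones(6),       # minimize sum of activations
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(a_min, None)] * 6)
    if res.success:
        print("activations:", res.x)  # one or more muscles sit at a_min

Because the optimum of a linear program lies on a vertex of the feasible simplex, some muscles always end up at their minimal activation, exactly as noted above.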
There is no explicit way to specify the time of the movement in any of these strategies. Indirect control of movement speed is possible through modulation of overall activation: increased activation results in larger forces, increasing movement speed. In return for this limitation one gains the advantage of an extremely simple control.

Figure 1. Simulated movements using different single command strategies. A: Pure final position control. B: Directed acceleration final position control. C: Directed stiffness final position control. D: Synergical final position control. In each case the large solid square represents the position of the shoulder joint, and the semicircle is the outer limit of the workspace. The path of the hand is shown. Movements end at the small solid square. Two lines are drawn at the start point: the long line is the direction of initial force at the hand, while the short line (not always visible) is the direction of initial acceleration. All four simulations represent similar movements away from the body of about 20 cm extent.

IV. Results.

A standard format has been followed in presenting results: only the hand trajectory within the workspace is depicted. Due to space limitations only one characteristic movement is presented for each strategy. In each case a movement starting in front of the shoulder and directed straight ahead is shown.

pFPC (Figure 1A). Movements often deviated considerably from straight line trajectories, and could frequently best be described as "wild". Indirect control of speed through overall activation level was not very successful. Velocity profiles showed minima at reversal points in the movement and at points of high curvature. At best these were much less symmetrical than the prototypes.

dFPC (Figure 1B). Again there was often considerable deviation from straight lines, though some movements were very good. The figure shows one of the better results. The velocity profile was highly asymmetrical. There were several points in the workspace at which no set of muscle stiffnesses could satisfy all constraints. Control of movement speed was least successful using this method. However the trajectories were more realistic than pFPC.

sFPC (Figure 1C). Setting the stiffness tensor as described above resulted in straighter movements, often comparable to prototypes. Some movements still displayed high curvature. Velocity profiles were again asymmetrical. There were many points in the workspace where the constraints could not be satisfied, and there were severe limits as to how different the eigenvalues of the stiffness tensor could be made (the ratio of eigenvalues determines the steepness of the potential energy "valley"). While a wide range of movement speeds was possible, higher velocities necessitated extremely high levels of muscle activation, together with the considerable co-contraction needed to realize the stiffness tensor. If the stiffness levels were made sufficiently large (typically an order of magnitude greater than any seen in human limbs) then movements with minimal curvature could be generated.
cFPC (Figure 1D). In most cases only one synergy resulted in a feasible solution. The results were not radically different from sFPC: movements were gently curved, velocity profiles were asymmetrical, and a fair degree of control of speed was possible. There was substantial co-contraction, since muscle activation levels were linked. In general the two-joint muscles seemed to be less strongly activated than the others, though this may have been an artifact of the implementation algorithm. Since this synergy strategy takes no account of the starting point, it requires minimum information for its implementation.

V. Discussion.

None of the methods proposed was able to account for observed human behavior. However all did have rich behavioral repertoires, many of which are interesting in their own right. There were two strategies that required no information as to the starting location (pFPC and cFPC). Although this information is usually available, a strategy of this type would be useful to an organism that had lost sensory feedback - these are essentially open loop motor programs. Since deafferented animals are able to move without visual feedback, the nervous system must have at least one open loop strategy available. The ability to control the form of the trajectory is severely restricted in these schemes. However there is no means by which detailed control of trajectory to a target is possible if the initial state is unknown.

Both sFPC and dFPC attempted to escape from this limitation, in both cases at the cost of increased complexity in determining muscle activation levels. In both cases an additional cost was a reduction of the part of the workspace in which the strategy was realizable. This may not be a real limitation, since accurate control of movement may only be necessary in the restricted region in which most manipulation occurs. dFPC was unable to generate reliably straight trajectories. sFPC was considerably more successful in realizing this requirement, but was limited by the poor agreement with hand velocity profiles and the high levels of coactivation of almost all muscles necessary to generate movements at normal speeds.

The results are interesting in the context of manipulator control [7] in that they suggest that strategies such as sFPC may be useful where low precision, computationally inexpensive free movements are required. The most obvious application is in manipulators with tendon actuators. However the kinematic properties of the muscle-like actuators used here may be easily simulated using conventional torque motors or hydraulic actuators. Such schemes are intrinsically compliant, another potential advantage over conventional controls.

In conclusion, "single command" schemes employing final position control are not flexible enough to be a sufficient explanation for upper limb reaching movements. However it is possible that they form part of the overall motor controller, with a superimposed detailed trajectory plan [8]. This form of combined control would simplify the transition from movement to posture. Further insight into that component of arm movements that is a consequence of arm and muscle kinematics and dynamics has been obtained, resulting in improved understanding of the control problem.
References.

[1] P. Morasso. (1981) Spatial control of arm movements. Exp. Brain Res. 42: 223-227.
[2] W.K. Abend, E. Bizzi and P. Morasso. (1982) Human arm trajectory formation. Brain: in press.
[3] A.G. Feldman. (1974) Change of muscle length due to shift of the equilibrium point of the muscle-load system. Biophysics 19: 534-538.
[4] A. Polit and E. Bizzi. (1979) Characteristics of motor programs underlying arm movements in monkeys. J. Neurophysiol. 42: 183-194.
[5] J.M. Hollerbach and T. Flash. (1982) Dynamic interactions between limb segments during planar arm movements. Biol. Cybernetics: in press.
[6] H. Hatze. (1979) A model for the computational determination of parameter values of anthropomorphic segments. South African C.S.I.R. Technical Report TWISK 79.
[7] N. Hogan. (1980) Mechanical impedance control in assistive devices and manipulators. Proc. Joint Auto. Control Conf. 1: TA10-B.
[8] E. Bizzi, N. Accornero, W. Chapple and N. Hogan. (1982) Arm trajectory formation in monkeys. Exp. Brain Res. 46: 139-143.
DEVELOPMENT OF AN OVERALL DIRECTION-OF-ACTION SENSOR FOR ROBOTS

Tokuji Okada
Electrotechnical Laboratory
1-1-4, Umezono, Sakura-mura, Niihari-gun, Ibaraki, 305 Japan

ABSTRACT

The development of an optical sensor for measuring overall direction-of-action is treated. The measurement principle is based on distortions found in the projected image of a ball moving freely in a spherical vessel under the influence of the acceleration of gravity and motion. The measurement is carried out in a static condition; thus, the sensor output stands for the direction of the gravitational field. Experimental results are presented to show that the angular errors are within 12 degrees for the overall direction. This sensor can be widely applied for measuring angular variations from the direction of the gravitational field.

I. INTRODUCTION

Angular values of joints are available to detect the orientation or direction of an end effector of robots. In a multijointed system, it requires a lot of computation to determine the orientation or direction, and the load becomes larger when the system has many joints, like the nose of an elephant. Therefore, sensors for direction measurement are required; however, direction sensors whose mechanisms are unified like a gyroscope have still not been developed [1, 2]. Two parameters expressing the direction are now measured separately by means of cantilevers and pendulums. These are suitable for detecting the direction in a specific plane, but a few drawbacks can be pointed out in measuring an arbitrary direction. In order to solve these problems, we proposed two methods for direction measurement [3].

In this paper, we describe the development of a direction sensor based on the second method in [3]. This sensor uses the projected image of an opaque ball housed in a bigger transparent spherical vessel and can measure the overall direction-of-action. First, an outline of the measurement principle is given. Secondly, we describe the design of the sensor head and the procedures to extract directional information from the projected image. Thirdly, experimental results which show the usefulness of the sensor for measuring all directions are presented.

II. PRINCIPLE OF DIRECTION MEASUREMENT

The sensor equipment is basically shown as in Fig. 1(b). It is composed of a point source of light, an area sensor, a transparent medium having a spherical vessel, an opaque ball housed in the spherical vessel, and transparent liquid filling the space of the vessel. The opaque ball can move freely, since the ball is housed in the vessel with sufficient liquid room. Therefore, by locating a weight in the center of the opaque ball, the ball moves under the influence of the acceleration of gravity and motion so as to indicate their combined direction. Then, the ball reaches its equilibrium state. The position of the ball in the vessel is determined by using an image in the photosensitive area of the area sensor, since the image is obtained by applying light from a point source to the vessel. Therefore, the direction in which the ball exists, i.e., the combined direction of the acceleration, is measured. This is an outline of the measurement principle. Next, we explain the details of this principle.

Figure 1. Relationships between direction of the ball and its projected image.
For reducing the complexity of the projected image, the conditions should be such that both the planes of incidence and refraction of the transparent medium are parallel, and also that the refractive indices of both the transparent medium and the transparent liquid are the same. By adopting these conditions, the image is simplified to become nearly equal to an ellipse which distinguishes between interior darkness and exterior brightness in the photosensitive area. Based on these conditions, the image contour as shown in Fig. 1(a) is formed by a set of light rays touching the opaque ball, and the light arriving at the farthest point from origin O' of the photosensitive area takes the route as shown in Fig. 1(b).

In Fig. 1, the direction of the ball is expressed by parameters α and β. In order to determine these two parameters, we use $R_d$ and $R_s$ data, which are defined as the farthest distance and the nearest distance from origin O' to the elliptic contour in the photosensitive area. Rectangular coordinates $(R_{dxo}, R_{dzo})$ of the point related to the $R_d$ data are also used. It proves that the contour positions related to $R_d$ and $R_s$ data are located in opposite directions from origin O'. The symbolic notation D denotes the summation of $R_d$ and $R_s$.

Notice that the angle of incidence, $\rho_i$, between the incident light and the normal is calculated from the following equation (see [3]),

$E_1\tan^4\rho_o + E_2\tan^3\rho_o + E_3\tan^2\rho_o + E_4\tan\rho_o + E_5 = 0$,  (1)

where $E_1, E_2, \ldots, E_5$ are constants related to the $R_d$ data. But in our sensor design, the valid condition is

$\left(\frac{n_{14}}{n_5}\right)^2 (1 + \cot^2\rho_i) - 1 \approx \left(\frac{n_{14}}{n_5}\right)^2 \cot^2\rho_i$,  (2)

where $n_{14}$, $n_5$ are the refractive indices of the transparent medium and the surrounding medium, respectively. This approximation yields a tangential component of $\rho_o$ expressed simply as

$\tan\rho_o = (n_{14}/n_5)\tan\rho_i$,  (3)

where $\rho_o$ is the angle of refraction. The unknown parameter $\rho_i$ is thus uniquely obtained by solving the equation

$\rho_i = \tan^{-1}\frac{R_d}{(W_1 - W_3 - W_4) + (n_5/n_{14})(W_3 + W_4)}$.  (4)

The value of $\rho_i$ is used to compute the center position of the opaque ball in the coordinate system (X, Y). In fact, its axial component along the X-axis is obtained by solving a quadratic equation (see eq. (25) in [3]), and then the axial component along the Y-axis is obtained. But in general, two solutions, i.e., $G_1(x_1, y_1)$ and $G_2(x_2, y_2)$, are obtained ($y_1 \ge y_2$), since the ball touches light path L in Fig. 1(b) on the upper side and the lower side of the vessel. When the ball is near to the light source, its projected image becomes large, but when the ball is far from the light source, the image becomes small. When light path L is in contact with the sectional circle of the vessel, the center position of the ball is determined uniquely, since the two points, upper side and lower side, join together. This critical position is physically determined. We use the symbol $D_c$ for the value of D in the critical position. The value of $D_c$ is calculated from

$D_c = \frac{2\{(W_1 - W_3 + W_2 - W_4) + (W_3 + W_4)\} W_5^2 R_3}{\{W_5^2 - (R_2 - R_3)^2\}\sqrt{W_5^2 - R_3^2}}$,  (5)

where $R_2$ and $R_3$ are the radii of the vessel and the opaque ball, respectively, and $W_5$ is the distance between center O and the position of the observed light source, such that

$W_5 = W_3 + (n_{14}/n_5)(W_1 - W_3)$.  (6)

By using the physically determined value $D_c$, the true position of the ball is chosen from the candidates $(x_1, y_1)$ and $(x_2, y_2)$. That is, candidate $(x_1, y_1)$ is correct when the D data is less than $D_c$, but candidate $(x_2, y_2)$ is correct when the data is greater than or equal to $D_c$. Thus, the true position of the ball can be decided. Denote the position by (X, Y); then the two parameters expressing the direction of the opaque ball are given by

$\alpha = \cos^{-1}\left(Y/(R_2 - R_3)\right)$;  (7)
$\beta = \tan^{-1}\left(R_{dzo}/R_{dxo}\right)$.  (8)
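The final step, equations (7) and (8), reduces to two library calls. Below is a minimal sketch (atan2 is used so that β covers the full circle; the variable names are ours).

    import math

    R2, R3 = 4.0, 2.375   # vessel and ball radii in mm (section III)

    def direction(Y, rdxo, rdzo):
        # Equations (7) and (8): tilt alpha from the reconstructed ball
        # height Y, azimuth beta from the far-contour coordinates.
        alpha = math.acos(Y / (R2 - R3))
        beta = math.atan2(rdzo, rdxo)
        return math.degrees(alpha), math.degrees(beta)

    print(direction(0.8, 1.0, 0.5))

Note that β depends only on the ratio of the contour coordinates, so it is unaffected by the overall magnification of the optics.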
III. DESIGN OF SENSOR HEAD

The sensor head design is sketched as shown in Fig. 2. The magnitude of the image becomes large because of the light radiating from a point. But in reality, the sensor is limited to a small extent in its photosensitive area. Therefore, a screen and a ball-shaped optical lens are used for image reduction. The screen is translucent and attached to the refracting surface of the transparent medium. The lens casts the image on the screen onto the area sensor without protruding from its photosensitive area. A point source of light is realized by placing a pinhole close to the lamp. The pinhole, with a radius of 0.35 mm, is bored into a metallic board having a thickness of 0.05 mm. Cylindrical bakelite connects the board with the transparent medium. These components are joined with adhesive material. The transparent medium is made of BK7 glass with a refractive index of $n_d = 1.51635$.

Figure 2. Design sketch of the sensor head.
Figure 3. View of the sensor head.

The spherical vessel is produced by pasting together two 6 mm thick plates of glass having hemispheres in their centers. In the pasting process, a steel ball with a radius of 2.375 mm, of the kind in wide use as a ball bearing, is captured; thus the medium having a spherical vessel with a radius of 4 mm, in which the opaque ball is housed, is made. Notice that the liquid in the vessel should be equal to its surrounding medium in refractive index, in addition to being transparent. Such liquids as cedar oil and anisole are considered for this use, since their refractive indices are nearly equal to that of BK7 glass. But these are extremely different in viscosity; that is, the former is sticky and the latter is lacking in stickiness. It has been proven that these liquids do not react chemically and are miscible with each other. Therefore, a liquid containing one part of cedar oil and two parts of anisole is obtained and used to fill the space of the vessel. One plate of glass has a rectangular groove of 0.8 x 0.8 mm for the injection. By using an injector, the space of the vessel is filled with the blended liquid. The outer end of the groove is sealed with a silicon film so as to keep the liquid from evaporating. The blended liquid is shown to be satisfactory for the aforementioned conditions and suitable for damping the mobility of the ball.

When light from the pinhole enters the transparent medium of the glass, the light runs straight in the media, including the vessel, since the refractive indices of the glass, liquid, and adhesive connecting the two plates of glass are all in the neighborhood of 1.51. An optically diffusible 0.02 mm thick polyethylene film is attached to the side of the medium from which the light comes out. This film acts as a screen. Also, the film makes it possible to view the image on the screen from behind, since the film is translucent. Thus, a silhouette of the opaque ball is produced on the screen, and the image on the screen is optically available for subsequent processing.

The above-mentioned parts are in rectangular box A in Fig. 2. In box B, adjacent to box A, an optical lens is installed to cast the image on the screen onto an area sensor in reduced form. The lens is BaK4 glass, with a refractive index of $n_d = 1.58663$, having a ball shape with a radius of 7.9 mm.
The distances from the center of the lens to the screen and to the photosensitive area are $W_6 = 39.05$ mm and $W_7 = 13.69$ mm. In box C, an area sensor is installed having a photosensitive area of 4.1 x 4.1 mm². These boxes are screwed together so as to be arranged on a line connecting the centers of the pinhole, spherical vessel, ball lens, and area sensor. A view of the sensor head is shown in Fig. 3. The dimensions of the head are 38 mm x 38 mm x 140 mm.

IV. CALCULATION PROCESS

By using a ball lens, a reduced copy of the image on the screen is cast onto the area sensor. But in reality, nonlinearity in the image reduction is recognized, since the reduction rate depends on the height of the image from the optical axis (radial length); that is, the rate becomes large as the image on the screen separates from the axis. This is caused by the spherical aberration of the lens, and this fact implies that the true position of the image on the screen is not obtained by simple magnification. Thus, the height-dependent reduction rate is obtained. Consider a parameter h and a magnifying factor $C_r$ for the height of the image on the photosensitive area and the reciprocal of the reduction rate, respectively. Their relationship can be expressed in the form $C_r = C_2/(h + C_1)$, where $C_1$ and $C_2$ are constants. The values of $C_1$ and $C_2$ are determined by applying the least squares method to data obtained in an experiment prior to the practical direction measurement. Note that the area sensor has a Pyrex window with a thickness of 0.7 mm and a refractive index of $n_d = 1.474$. This window serves to scale down the image on the photosensitive area; its reduction rate is accounted for in determining the relationship between h and $C_r$. Thus, the factor $C_r$ means the reciprocal of the total reduction related to the lens and window.

In a practical measurement, the data $R_d$, $(R_{dxo}, R_{dzo})$, and $R_s$ used in section II are given by products of the $C_r$ value and the original data $r_d$, $(r_{dxo}, r_{dzo})$, and $r_s$, which are detected from the sensor in much the same way as the data $R_d$, $(R_{dxo}, R_{dzo})$, and $R_s$ are obtained. The flowchart to calculate the two parameters, α and β, is shown in Fig. 4; its steps are:

1. Obtain $r_d$, $(r_{dxo}, r_{dzo})$, and $r_s$.
2. Calculate $r_d \times C_r(r_d)$ to obtain $R_d$, and $r_s \times C_r(r_s)$ to obtain $R_s$.
3. Calculate $\rho_i$ from eq. (4) by using $R_d$.
4. Solve the quadratic equation and obtain coordinates $G_1(x_1, y_1)$ and $G_2(x_2, y_2)$, where $y_1 \ge y_2$.
5. Choose the true position (X, Y) by comparing $D = R_d + R_s$ with $D_c$.
6. Compute $\alpha = \cos^{-1}(Y/(R_2 - R_3))$ and $\beta = \tan^{-1}(r_{dzo}/r_{dxo})$.

Figure 4. Flowchart for calculating the two parameters α and β.
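The calibration of $C_1$ and $C_2$ is a linear least-squares problem in disguise, since $C_r = C_2/(h + C_1)$ is equivalent to $h C_r = C_2 - C_1 C_r$, which is linear in $C_r$. A minimal sketch, with made-up sample points standing in for the pre-measurement experiment:

    import numpy as np

    # Calibration pairs: image height h on the sensor vs. measured
    # magnification Cr (the numbers are made up for illustration).
    h = np.array([0.5, 1.0, 1.5, 2.0])
    Cr = np.array([9.8, 9.3, 8.9, 8.5])

    # Fit h*Cr = C2 - C1*Cr by ordinary least squares.
    A = np.vstack([-Cr, np.ones_like(Cr)]).T
    (C1, C2), *_ = np.linalg.lstsq(A, h * Cr, rcond=None)

    def magnify(r):
        # Scale a raw sensor distance r up to screen coordinates,
        # evaluating Cr at height r as in step 2 of the flowchart.
        return r * C2 / (r + C1)

With the two constants in hand, `magnify` implements the $r_d \times C_r(r_d)$ and $r_s \times C_r(r_s)$ products of the flowchart.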
V. LABORATORY RESULTS

In the experiment, the two parameters $W_2$ and $W_4$ in Fig. 1(b) are equal, and the other parameters have the values $W_1 = 17.0$, $W_2 = W_3 = W_4 = 6.0$, $W_5 = 22.676$, $W_6 = 39.05$, $W_7 = 13.69$, $R_2 = 4.0$, $R_3 = 2.375$ in millimeters, with $n_{14} = 1.516$ and $n_5 = 1.0$.

The experimental results of the direction-of-action measurement in a static condition are shown in Fig. 5. In the figure, angles α and β are plotted on the radial and circumferential scales, respectively. The origins of the arrows denote the calculated data, and the heads of the arrows point out the experimental results. Thus, the segments of these lines with arrows represent the angular errors. In the experiment, the approximate equation (2) is utilized, since the left and right terms are at most 18.75 and 17.44. The results obtained under this condition show that the errors of angles α and β are within 12 degrees in all directions. The time required to measure the direction is within 500 milliseconds.

Figure 5. Experimental results of the direction measurement.

VI. CONCLUSIONS AND FUTURE APPLICATIONS

An overall direction-of-action sensor has been developed. In the experiment, the direction is measured under static conditions; that is, the sensor is used as a gravity sensor. Experimental results have confirmed that the measurement error is within 12 degrees in all directions and that the time required is within 500 milliseconds. The ball-shaped lens is used in the design of the sensor head since the area sensor is limited to a small extent in its photosensitive area. If the area were wide, the lens would not be needed anymore. Then the sensor could be revised with better accuracy, more simplified signal processing, and reduced size, making it more useful.

The direction of the gravitational field is utilized to detect the direction of the end effector of a robot. Therefore, the sensor is useful in controlling the direction of the robot. In particular, it is effective in a robot having many degrees of freedom in motion. Also, the sensor could be applied to systems like platforms, inverted pendulums, and legs, since the direction of the gravitational field is indispensable for preventing them from inclining, shaking, and falling down.

ACKNOWLEDGEMENT

I would like to thank the members of the robotics group at the Electrotechnical Laboratory for their helpful discussions.

REFERENCES

[1] Nitzan, D. "Robotic Sensors in Programmable Automation." Technical Note 183, SRI International, Menlo Park, California, March 1979.
[2] Harmon, L. D. "A Sense of Touch Begins to Gather Momentum." Sensor Review 1:2 (1981), 82-88.
[3] Okada, T. "New Methods for Direction Sensors by Using Projected Image." In Proc. ISIR-81, Tokyo, Japan, October 1981, pp. 179-186.
AN EFFICIENT COLLISION WARNING ALGORITHM FOR ROBOT ARMS

E. Gordon Powell
Naval Surface Weapons Center
White Oak, Silver Spring, Maryland 20910

ABSTRACT

A hashing technique is used as an arm collision warning algorithm for two-armed robots which can be expected to be a factor of 10³ more efficient in memory resources than a direct geometrical calculation. Computational savings are also achieved.

I. INTRODUCTION

A. Motivation

The warning of imminent collision, one with another, by the two arms of a robot is a practical problem with considerable economic interest. We present here a warning algorithm which is efficient in memory resources and in computational requirements. Assumed in all the following is the existence of an algorithm which proposes a new arm position that is at no point further away from the old position than the resolution capabilities of the warning algorithms to follow. We introduce the efficient algorithm by first considering an intuitive but inefficient algorithm.

II. THE DIRECT APPROACH

A. Direct Geometry

To avoid collisions, no part of two objects should occupy the same space at the same time. With the volume of an object (a robot arm) represented by the union of volumes of a finite covering of spheres, cubes, prisms, et cetera, and with the Euclidean space E within reach of the arms represented by contiguous cells (conveniently cubes), we have the following trivial but workable algorithm:

1. Direct Algorithm

a. For every element of a finite covering of one arm, mark every cell in E which is touched by the covering element.
b. At the proposed position of the other arm, for every element of the arm's covering, check whether any portion of it lies within a cell of E previously marked by the other arm. If so for any element of the covering, then collision is predicted for the proposed arm position.

B. Memory cells

If E, the working volume of the two arms, is of the order 10⁷ cm³ and if about a one centimeter warning distance is required, then algorithm A requires 10⁷ memory cells.

III. A BETTER WAY

A. The Indirect Method

We note that the volume of the arms themselves is likely to be of the order 10 cm x 10 cm x 100 cm, or a factor of 10³ less than the volume of E. This immediately suggests the use of a hashing function to compress the ratio of volumes, with a consequent reduction in memory requirements. Thus we are led to the following philosophy: to avoid collisions, no part of two objects should occupy the same part of hashed space unless a direct geometrical check has been made to determine that only a statistical collision exists. If the coordinates of each cell of E are hash mapped to H(E), then we have the following algorithm:

1. The Hashed Algorithm

a. For every element of a finite covering of arm A, deposit in every bucket of H(E) the label of each element of the finite covering which maps to that bucket.
b. At the proposed position of arm B, for every element of its finite covering, check the corresponding bucket of H(E); if no one of the arm A coverings denoted by the possibly nil set of labels is physically coincident, as checked by direct geometric calculation, then no collision exists.

B. Advantages

With a 1/k loading of the hashed space, only one in k statistical collisions leads to the computational burden of direct geometrical calculation. But the direct geometrical calculation is made efficient, since only a small subset of the relevant coverings are considered.
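A minimal sketch of the hashed algorithm follows, with each arm's covering taken to be spheres and occupied one-centimeter cells hashed into a small bucket table. The cell size, table size, and sphere-overlap test are our illustrative choices, not the paper's.

    import math
    from collections import defaultdict

    CELL = 1.0          # one-centimeter warning resolution
    N_BUCKETS = 10_000  # |H(E)|, roughly 10^3 smaller than the 10^7 cells of E

    def buckets(sphere):
        # Hash every cell of E whose bounding box the covering sphere
        # (cx, cy, cz, r) touches -- a conservative overestimate.
        cx, cy, cz, r = sphere
        lo = [math.floor((c - r) / CELL) for c in (cx, cy, cz)]
        hi = [math.floor((c + r) / CELL) for c in (cx, cy, cz)]
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    yield hash((i, j, k)) % N_BUCKETS

    def warn(arm_a, arm_b):
        # Step (a): deposit covering-element labels of arm A in H(E).
        table = defaultdict(list)
        for label, s in enumerate(arm_a):
            for b in buckets(s):
                table[b].append(label)
        # Step (b): a shared bucket is only a statistical collision;
        # the direct geometric check decides.
        for s in arm_b:
            for b in buckets(s):
                for label in table[b]:
                    a = arm_a[label]
                    if math.dist(a[:3], s[:3]) < a[3] + s[3]:
                        return True
        return False

Only the labels stored in shared buckets are ever compared directly, which is the source of the computational saving claimed below.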
Thus the retrieval properties of hashing minimize the square-law explosion in the covering combinations on each arm which must be considered. The same technique can be used to check efficiently for collisions of arms with other obstacles in E, provided the volume of the obstacles is much less than the volume of E.

ACKNOWLEDGEMENTS
We were led to consider this problem by a seminar at the Naval Laboratory for Applied Research in Artificial Intelligence at which Prof. Jacob Schwartz of the Courant Institute, N.Y.U., emphasized the importance of the arm collision problem. Prof. Schwartz is in no way responsible for any deficiencies in this algorithm.

* This work was supported in part by the Office of Naval Research, Work Request N001482WR20134, Dr. Alan Meyerowitz, Code 411, Project Monitor.
1982
96
196
STABILITY OF AUTOMATIC GUIDANCE FOR A MOBILE ROBOT
L. Y. Shih
Division of Electrical Engineering, National Research Council, Ottawa, Canada

ABSTRACT
A control law for automatic guidance is proposed in this paper, as if each obstacle exerts upon the mobile robot a repulsion which varies inversely with the distance between the robot and the obstacle and becomes infinite as the robot approaches the obstacle. The stability of the automatic guidance is studied when the mobile robot deviates from the ideal path. The results lead to criteria for selecting control parameters for better guidance.

I. INTRODUCTION
Development of mobile robots equipped with microprocessors and sensory devices has attracted great attention in current research [1-4]. The need for automatic guidance and control of such vehicles arises because they are a powerful and promising means for tasks which are repetitious, dangerous, or otherwise unattainable, such as industrial conveyance, handling of radioactive or explosive materials, fire-fighting, underwater or interplanetary missions, aids for the blind, and many other purposes. Several types of automatic guidance system have been proposed, and a number of mobile robots have been constructed for research and development. The mobile robot considered in this paper is intended to travel in a plane universe, avoiding collisions with fixed obstacles and walls en route. It consists of a platform moved by two driving wheels and balanced by two castors.

II. CONTROL LAW
To guide the robot automatically through a collision-free passage in an environment of obstacles, a control law may be devised as if each obstacle exerts upon the robot a repulsion which varies inversely with the distance between the robot and the obstacle and becomes infinite as the robot approaches the obstacle. Mathematically we propose

    Γ = (w_R - w_L)/(w_R + w_L) = ∫ e_v × ∇Φ dα,          (1)
    Φ(r) = C(r/r_0 - 1)^(-n),                              (2)

where the robot, as shown in Fig. 1, of a nominal radius r_0 (defined as the external limit for collision), travels in a potential field Φ with a velocity v; w_R and w_L denote the angular speeds of the right and left wheels; e_v denotes the unit vector along the velocity, and (r, α) specify the position of an obstacle relative to the robot. Γ indicates the turning moment about the vertical axis: negative Γ for right turns and positive Γ for left turns.

Fig. 1. Interaction of the mobile robot and an obstacle

There are two control parameters, both of positive value: n and K, where K is related to C by K = nC/r_0 for mathematical convenience. Large values of n tend to reduce sensitivity to information collected on distant obstacles and thus make the robot "near-sighted", while large values of K lead to sharp turns.

Let the position of the robot be specified by the coordinates (x, y) in a reference frame fixed to the environment. From the geometry of the path, as shown in Fig. 2, we write

    dx = ds sin Θ,   dy = ds cos Θ.                        (3)

Fig. 2. Geometry of the path

If the radius of curvature R of the path is defined to be positive for left turns and negative for right turns, then w_R/(R + b/2) = w_L/(R - b/2), where b denotes the distance between the two driving wheels. Substituting this expression into equation (1), we have

    Γ = (1/2) b/R.                                         (4)

But the infinitesimal length of the path may be expressed as ds = -R dΘ, where ds is positive in the direction of robot motion. Substituting this relation into equation (4) to eliminate R, we obtain

    dΘ/ds = -(2/b)Γ.                                       (5)

To determine the geometry of the path, equations (3) and (5) should be solved together with expressions (1) and (2).
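Under the reconstructed reading of equations (1) and (2) above, the turning moment can be evaluated numerically as sketched below. The obstacle list, the parameter values, and the finite-difference gradient are illustrative assumptions, not the paper's implementation; the sketch only makes the sign convention concrete.

    import math

    def phi(r, r0, C, n):
        # Repulsive potential of eq. (2): grows without bound as r -> r0.
        return C * (r / r0 - 1.0) ** (-n)

    def turning_moment(obstacles, r0=0.3, K=2.0, n=1):
        """Eq. (1) read as: each obstacle at (range r, bearing alpha) contributes
        a moment proportional to the repulsion gradient, resolved by sin(alpha).
        Bearings are relative to the heading, positive to the left; C is
        recovered from K via K = n*C/r0, as in the text."""
        C = K * r0 / n
        h = 1e-6
        gamma = 0.0
        for r, alpha in obstacles:
            # dPhi/dr < 0, so an obstacle on the right (alpha < 0) yields gamma > 0.
            grad = (phi(r + h, r0, C, n) - phi(r - h, r0, C, n)) / (2 * h)
            gamma += grad * math.sin(alpha)
        return gamma  # negative Gamma: right turn; positive Gamma: left turn

    # One obstacle a metre ahead and to the right (bearing -30 degrees):
    # the repulsion should command a left turn (Gamma > 0).
    print(turning_moment([(1.0, math.radians(-30))]))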
III. STABILITY OF GUIDANCE
Let us consider a straight passage bounded by two infinitely long walls. Apparently the ideal path is on the centerline of the passage. Suppose that for some reason the robot deviates from the ideal path in both position and direction; in this section we shall study how the robot is guided back to the ideal path.

Fig. 3. Deviation from the ideal path

As shown in Fig. 3, without loss of generality the reference frame may be fixed in such a way that the Y-axis coincides with the centerline of the passage, with y = 0 initially. Let Θ denote the angular deviation of the robot, ρ the range of the sensor, and 2w the perpendicular distance between the two walls. The angular limits of the two walls, α_1 and α_2, are defined by

    cos α_1 = (w - x)/ρ,   cos α_2 = (w + x)/ρ.            (6)

Following expressions (1) and (2), we write

    Γ = K ∫_0^{α_1} [(w - x)/(r_0 cos α) - 1]^{-(n+1)} cos(α + Θ) dα
      - K ∫_0^{α_2} [(w + x)/(r_0 cos α') - 1]^{-(n+1)} cos(α' - Θ) dα'.   (7)

For small deviations, we assume that x/w, Θ, r_0/w, and w/ρ are small quantities of the same order. Applying the technique of small perturbations to expression (7), an approximate expression of the first order may be written as

    Γ = 2(n + 1)K (r_0/w)^{n+1} [sin Θ/(n + 2) + A(n + 2)(x/w)] cos Θ,     (8)

where the function A(n) is defined as A(n) = ∫_0^{π/2} cos^n(α) dα.

For the environment under consideration, it is more convenient to use y as the independent variable. We may combine equations (3) and (5) to eliminate ds, and with the substitution of expression (8) we write

    dx/dy = tan Θ,                                                          (9)
    dΘ/dy = -(4(n + 1)K/b)(r_0/w)^{n+1} [sin Θ/(n + 2) + A(n + 2)(x/w)].   (10)

Equations (9) and (10) may be solved to determine the geometry of the path. For Θ << 1 as assumed, the linearized solutions may be obtained as follows:

    x = exp(-λy/w) [x_0 cosh(ky/w) + ((λx_0 + wΘ_0)/k) sinh(ky/w)],        (11)
    Θ = exp(-λy/w) [Θ_0 cosh(ky/w) - ((λΘ_0 + (λ² - k²)x_0/w)/k) sinh(ky/w)],  (12)

where the parameters λ and k are defined as

    λ = (2(n + 1)/(n + 2)) K (r_0/b)(r_0/w)^n,                             (13)
    k² = λ² - 2(n + 1) A(n) λ.                                             (14)

Equation (14) indicates that k is imaginary when

    K (r_0/b)(r_0/w)^n < (n + 2) A(n).                                     (15)

IV. NUMERICAL RESULTS AND CONCLUSIONS
As a numerical illustration of solution (11), we choose the initial conditions x_0/w = 0.1 and Θ_0 = 10 degrees. Let us assume that n = 1; for this value A(n) = 1. Equation (14) indicates that when λ = 4, k is zero; in this case the path is represented by the dotted curve in Fig. 4. Two other paths, for λ = 8 and 2, corresponding to a real and an imaginary k respectively, are also depicted in this figure by the solid curves. The figure reveals that an imaginary k provides poor guidance.

Next, we consider cases of large deviation, where linearization and small-perturbation methods are not applicable. Equations (3) and (5) are solved numerically, with Γ given by expression (7), for the initial conditions x_0/w = 1/2 and Θ_0 = 20 degrees. The results for n = 1 and K r_0/w = 9, as shown in Fig. 5, agree in general pattern with those from the small-perturbation method.

Fig. 4. Returning paths to the ideal path indicate stability of guidance for n = 1 and various λ

Conclusions may be drawn from the results obtained that the control parameters should be selected according to the following criteria: (a) To avoid short-sightedness of the robot, small n is desirable. (b) To avoid the enormous centrifugal force resulting from a sharp turn, small K is desirable. (c) To ensure better guidance, an imaginary k should be avoided; this defines the minimum K, which is set by the value of n as shown by inequality (15). This in turn requires a small n. (d) In order to reduce processing time, which is essential for smooth movement, simplicity of the mathematical evaluation is highly desirable. Thus, in normal conditions we should set k to be zero.

Fig. 5. Returning path to the ideal path for n = 1 and K r_0/w = 9
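A minimal sketch that integrates the linearized deviation equations, assuming the reconstructions of (11)-(14) above are correct; with n = 1 it reproduces the λ = 2, 4, 8 cases of Fig. 4 (imaginary, zero, and real k). The Euler integrator and step sizes are illustrative choices.

    import math

    def simulate(lam, k2, x0=0.1, theta0=math.radians(10), dy=1e-3, ymax=2.0):
        """Euler-integrate the linearized equations in units x~ = x/w, y~ = y/w:
             dx~/dy~ = theta,   dtheta/dy~ = -2*lam*theta - (lam**2 - k2)*x~,
        which is equivalent to solutions (11)-(12) with k**2 = k2."""
        x, th, y, path = x0, theta0, 0.0, []
        while y < ymax:
            path.append((y, x))
            x, th = x + th * dy, th - (2 * lam * th + (lam * lam - k2) * x) * dy
            y += dy
        return path

    n = 1                    # A(1) = 1, so k = 0 exactly when lam = 2*(n+1)*A(n) = 4
    for lam in (2.0, 4.0, 8.0):
        k2 = lam * lam - 2 * (n + 1) * 1.0 * lam      # eq. (14) with A(1) = 1
        final = simulate(lam, k2)[-1][1]
        print(f"lambda = {lam}: k^2 = {k2:+.1f}, x/w after y = 2w: {final:+.5f}")

The λ = 2 run (imaginary k) decays while oscillating about the centerline, which is the "poor guidance" behavior the paper attributes to an imaginary k.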
REFERENCES
[1] Giralt, G., R. Sobek, and R. Chatila, "A Multi-Level Planning and Navigation System for a Mobile Robot." In Proc. IJCAI-79, Tokyo, Japan, Aug. 1979, pp. 335-337.
[2] Marce, L., M. Julliere, and H. Place, "An Autonomous Computer-Controlled Vehicle." In Proc. 1st Int. Conf. on Automated Guided Vehicle Systems, Stratford-upon-Avon, U.K., June 1981, pp. 113-122.
[3] Moravec, H. P., "Rover Visual Obstacle Avoidance." In Proc. IJCAI-81, Vancouver, Canada, Aug. 1981, pp. 785-790.
[4] Fujii, S., K. Yoshimoto, H. Ohtsuki, H. Kikuchi, H. Matsumoto, and K. Yamada, "Computer Control of a Locomotive Robot with Visual Feedback." In Proc. 11th Int. Symp. on Industrial Robots, Tokyo, Japan, Oct. 1981, pp. 219-226.
1982
97
197
Automatic Schema Acquisition in a Natural Language Environment
Gerald DeJong
Coordinated Science Laboratory, University of Illinois, Urbana, IL 61820

ABSTRACT
This paper outlines an approach to schema acquisition. The approach, called explanatory schema acquisition, is applicable in problem-solving situations and is heavily knowledge-based. Basically, learning is viewed as a fundamental part of the understanding process. Understanding a situation for which there is no existing schema involves generalizing the new event into a nascent schema. The new schema is then available to aid in future processing and can be further refined via that processing. This approach to learning is unique in several respects: it is not inductive and so is capable of one-trial learning; it does not depend on failures to drive the learning process; and it is incremental, learning comparatively slowly. The learning procedure is outlined briefly with an example, a taxonomy of situations involving explanatory schema acquisition is given, and there is a brief discussion of the scope of the learning mechanism.

1. Introduction
The concept of knowledge chunks, variously termed schemas, scripts, frames, or MOPs, has emerged to organize world knowledge in artificial intelligence systems. They have been used for natural language understanding, metaphor processing, memory organization, story summarizing, and planning. Yet there has been little work on how these constructs are acquired; most systems simply "build in" the requisite knowledge structures. Now that we have had some experience with this knowledge representation form, it is appropriate to ask how these knowledge structures might be acquired by AI systems automatically. In the remainder of the paper I will use the least contentious term "schema" to refer to these knowledge chunks.

There are many reasons why learning should be part of a natural language system. On the theoretical side, the ability to benefit from experience is a necessary and indeed a defining characteristic of intelligence. More practically, the ability to learn would be of great advantage to a system that deals with real-world input texts. This was made apparent from experience with the FRUMP system [1]. While it was a very successful program, its weakest link was its limited number of schemas. Each of its schema structures (called sketchy scripts) had to be added by hand.

This work was supported in part by the National Science Foundation under grant NSF IST 81-20254.

2. Overview
Schemas are used in natural language processing to supply the missing inferences that connect the propositions of an input text logically. In addition, some systems (e.g., FRUMP) also use schemas to aid in word and sentence interpretation.

What can a natural language system do if it does not have an appropriate schema for understanding a new input text? In certain circumstances, the system can process the input anyway, and in doing so acquire the missing schema. Most schemas are themselves composed of other schemas. If a text describes a new situation on the level of these constituent schemas, then a natural language understander can process the input by relying on general goal and planning background knowledge. This is not contentious: most planning-type natural language systems do precisely this (e.g., [7], [9]). It is important to notice that the story representation arrived at through planning can be viewed as a schema itself, albeit a poor, narrow, and over-constrained one.
That is, the process of re-reading this particular text would be immensely simplified if the system had access to the previously constructed representation. The predictions made by this "overly-specialized schema" would be exactly on target. The problem, of course, is that any modification, no matter how slight, of the story would directly contradict the new schema, making it inapplicable to the modified text. The important insight, and the basis for explanatory schema acquisition, is that the "overly-specialized schema" for an event can be generalized by a knowledge-based system into a plausible and useful schema. Moreover, the knowledge required for this generalization process is precisely the knowledge that is used by and present in planning systems. See [2] for a more complete overview and an example. Two large problems must be addressed: first, when is the generalization process invoked? and second, how does generalization occur?

3. Situations that Invoke Explanatory Acquisition
There are four situations which, when recognized in the text either individually or in combination, ought to invoke the generalization routines. They are: 1) Schema Composition, 2) Secondary Effect Elevation, 3) Schema Alteration, and 4) Volitionalization.

3.1 Schema Composition
Basically, schema composition involves connecting known schemas in a novel way. Typically, this will involve a primary schema, essentially unchanged, with one or more of its preconditions satisfied in a novel way by other known schemas. For example, a RANSOM schema is a combination of THEFT and BARGAIN. The primary schema is BARGAIN. It is used in the normal way (i.e., to obtain something of value by trading something one values less). The THEFT schema satisfies the precondition of possessing the thing that one plans to trade away. Thus, THEFT is used, somewhat peculiarly, to obtain an object that is itself not necessarily valued by the thief.

3.2 Secondary Effect Elevation
Secondary effect elevation involves acquiring a new problem-solving construct (schema) which is nearly the same as an existing schema but whose main effect is only a side effect of the original schema. Consider the following scenario, in which Fred uses secondary effect elevation to acquire a schema to solve his problem:

    Fred wanted to date only Sue, but Sue steadfastly refused his overtures. Fred was on the verge of giving up when he saw what happened to his friend, John: John wanted to date Mary but she also refused. John started seeing Wilma. Mary became jealous, and the next time he asked her, Mary eagerly accepted. Fred told Sue that he was going to make a date with Lisa.

Here Fred has used an existing schema (DATE) in a new way. The main purpose of the DATE schema is to satisfy certain recurring social goals (like companionship, sex, etc.). DATE contains secondary effects as well. These are often undesirable effects accompanying the main, planned effects. For example, one is usually monetarily poorer after a date. Another secondary effect is that if one has an old girlfriend, she may become jealous of a new date. What Fred learned from observing John's experience is that it is occasionally useful to invoke the DATE schema in order to cause one of its secondary effects (jealousy) while completely ignoring the usual main goal.

3.3 Schema Alteration
Schema alteration involves modifying a nearly correct schema so that it fits the requirements of a new situation.
The alteration process is guided by the system's world model. This is illustrated by the following brief anecdote:

    Recently I had occasion to replace temporarily a broken window in my back door with a plywood panel. The plywood sheet from which the panel was to be cut had a "good" side and a "bad" side (as does most raw lumber). The good side was reasonably smooth, while the bad side had several ruts and knot holes. I automatically examined both sides of the sheet (presumably as part of my SAWING or CUTTING-A-BOARD-TO-FIT schema) and selected the good side to face into the house, with the bad side to be exposed to the elements. After I had cut the panel and fitted it in place, I noticed that several splinters had been torn out, leaving ruts in the "good" side.

I immediately saw the problem. Hand saws only cut in one direction: the downward motion does the cutting, while the upward motion only repositions the cutting blade for another downward motion. I had cut the wood panel with the "good" side facing down. The downward cutting action has a tendency to tear splinters of wood out of the lower surface of the board. This is not a problem on the upper surface, because that wood is supported from below by other wood. Since the good side was the lower surface, it suffered the loss of splinters.

If I had to perform the same action again, I would not make the same mistake; I would cut the board with the good side facing up. However, what I learned was not just a simple specialized patch to handle this particular instance of splintering. Since I knew the cause of the splintering, I knew that it would not always be a problem: it is only a problem when 1) the lumber is prone to splintering, 2) there is a "good" side of the board that is to be preserved, and 3) one is making a crosscut (across the wood's grain) rather than a rip cut (along the grain). Moreover, the solution is not always to position the wood with the good side up. My electric saber saw (also a reciprocating saw) cuts during the upward blade motion rather than the downward motion. Clearly, the solution when using the saber saw is the opposite: to position the board with the good side down. Now, these are not hard and fast rules: with a sufficiently poor-quality sheet of plywood, splintering would likely always be a problem. Rather, these are useful heuristics that lead to a refinement of the SAWING schema.

3.4 Volitionalization
As the name implies, this situation involves transforming a schema for which there is no planner (like VEHICLE-ACCIDENT, ROULETTE, etc.) into a schema which can be used by a planner to attain a specific goal. Consider the following story:

    Herman was his grandfather's only living relative. When Herman's business was failing, he decided to ask his grandfather for a loan. They had never been close, but his grandfather was a rich man and Herman knew he could spare the money. When his grandfather refused, Herman decided he would do the old fellow in. He gave him a vintage bottle of wine spiked with arsenic. His grandfather died. Herman inherited several million dollars and lived happily ever after.

This story is a paraphrase of innumerable mystery stories and illustrates a schema familiar to all who-done-it readers. It might be called the HEIR-ELIMINATES-BENEFACTOR schema. It is derivable via volitionalization by modifying the existing non-volitional schema INHERIT. INHERIT is non-volitional since there is no active agent.
The schema simply dictates what happens to a person's possessions when he dies. In this guise, volitionalization parallels schema composition. One of the preconditions of INHERIT is that the individual be dead; the ELIMINATE-BENEFACTOR schema uses the schema MURDER to accomplish this. One major difference is that schema composition requires all volitional schemas. This parallelism need not always be present, however. Non-volitional to volitional transformation is also applicable to removing stochastic causal steps from a schema, resulting in a volitional one.

4. The Generalization Process
The generalization process is based on certain data dependency links established during understanding. After a story is understood, the understood representation can be viewed as an explanation of why the events are plausible. For example, take the case of a kidnapping. KIDNAP is an instance of schema composition, not unlike RANSOM. Thus, the first kidnapping story seen by the system is understood as a THEFT followed by a BARGAIN. If the kidnapper is successful, the ransom is paid. For a system to understand this, it must justify that the person paying values the safety of the kidnapped victim more than the ransom money. This justification is a data dependency [3] link to some general world knowledge (e.g., that a parent loves his children). Now the event can be generalized so long as these data dependency links are preserved. Clearly, as long as the data dependencies are preserved, the underlying events will still form a believable whole.

Consider again the secondary effect elevation example of Fred trying to date Sue. The observed specific instance is John's interactions with Mary. Notice, however, that Fred did not simply copy John's actions. John actually made a date with Wilma, while Fred only expressed an intention to date Lisa. This is not an earth-shaking difference, but in the context of dating it is extremely significant. In the normal DATE situation, expressing an intention to date someone is not nearly so satisfying as an actual date. Once modified for the purpose of causing jealousy, however, expressing an intention for a date and actually carrying it out can be equally effective. That is, they both maintain the data dependency link for why we believe that Sue is in fact jealous.
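To make the dependency-preserving step concrete, here is a toy Python sketch. The story structure, the role names, and the parent_of constraint are illustrative stand-ins for the kidnap example described above, not DeJong's actual representation; the point is only that constants mentioned by a justification keep their justifying property while all other constants become variables.

    # An understood KIDNAP story: a THEFT followed by a BARGAIN, with
    # justification links into general world knowledge.
    story = {
        "schemas": ["THEFT", "BARGAIN"],
        "bindings": {"thief": "Al", "payer": "Joe", "victim": "Bobby", "goods": "$10k"},
        # Data-dependency links: why each belief in the story is plausible.
        "justifications": [
            # The payer pays because he values the victim's safety over the
            # ransom; supported by the world fact "a parent loves his children".
            {"belief": "pays(payer)", "uses": {"payer": "parent_of(payer, victim)"}},
        ],
    }

    def generalize(story):
        """Replace each binding by a variable, keeping only the properties
        that some justification actually depends on."""
        constrained = {}
        for j in story["justifications"]:
            constrained.update(j["uses"])
        return {role: constrained.get(role, "anyone")   # free variable otherwise
                for role in story["bindings"]}

    print(generalize(story))
    # -> thief, victim, and goods become unconstrained variables, while the
    #    payer role keeps the constraint parent_of(payer, victim) that the
    #    dependency link needs to remain believable.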
5. Conclusion
There are several concluding points. 1) Unlike most learning systems, explanatory schema acquisition does not depend on correlational evidence; thus, it is capable of one-trial learning. It is somewhat similar to Soloway's view of learning [8]. 2) The approach is heavily knowledge-based. A great deal of background knowledge must be present for learning to take place. In this respect explanatory schema acquisition follows the current trend in AI learning and discovery systems, perhaps traceable to Lenat [5]. 3) The learning mechanism is not "failure-driven" as is the MOPs approach [6]. In that view, learning takes place in response to incorrect predictions by the system. In explanatory acquisition, learning can also be stimulated by positive inputs which encounter no particular problems or prediction failures. 4) The absolute representational power of the system is not enhanced by learning new schemas. This statement is only superficially surprising; indeed, Fodor [4] implies that this must be true of all self-consistent learning systems. Explanatory schema acquisition does, however, increase processing efficiency. Since all real-world systems are resource-limited, this learning technique does, in fact, increase the system's processing power. Furthermore, it may indicate how Socratic-method learning is possible and why the psychological phenomenon of functional fixedness is adaptive.

ACKNOWLEDGEMENT
I am indebted to Paul O'Rorke for several discussions on explanatory schema acquisition.

REFERENCES
[1] DeJong, G. (1979) Skimming stories in real time: an experiment in integrated understanding. Research Report 158, Yale Computer Science Department, New Haven, CT.
[2] DeJong, G. (1979) Prediction and substantiation: a new approach to natural language processing. Cognitive Science 3, 251-273.
[3] Doyle, J. (1978) Truth maintenance systems for problem solving. MIT AI Technical Report TR-419, MIT, Cambridge, MA.
[4] Fodor, J. (1975) The Language of Thought. Thomas Y. Crowell Company, New York.
[5] Lenat, D. (1976) AM: an artificial intelligence approach to discovery in mathematics as heuristic search. AIM-286, AI Laboratory, Stanford University, Stanford, CA.
[6] Schank, R. (1980) Language and memory. Cognitive Science 4, 243-283.
[7] Schmidt, C. and Sridharan, N. (1977) Plan recognition using a hypothesize and revise paradigm: an example. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pp. 480-486.
[8] Soloway, E. (1977) Knowledge directed learning using multiple levels of description. Ph.D. dissertation, Computer Science Department, University of Massachusetts, Amherst, MA.
[9] Wilensky, R. (1978) Understanding goal-based stories. Ph.D. dissertation, Yale Computer Science Report 140, September, Yale University, New Haven, CT.
1982
98
198
ACQUISITION OF APPROPRIATE BIAS FOR INDUCTIVE CONCEPT LEARNING*
Paul E. Utgoff and Tom M. Mitchell
Department of Computer Science, Rutgers University, New Brunswick, New Jersey 08903

Abstract
Current approaches to inductive concept learning suffer from a fundamental difficulty: if a fixed language is chosen in which to represent concepts, then in cases where that language is inappropriate, the new concept may be impossible to describe (and therefore to learn). We suggest a framework for automatically extending the language in which concepts are to be expressed. This framework includes multiple sources of knowledge for recommending plausible language extensions.

I Introduction
We consider concept learning problems in which there is a domain of instances over which concepts (generalizations) are to be learned. A trainer presents a sequence of training instances, each labelled as a positive or negative instance of the concept. The task of the learner is to acquire the ability to state correctly, for every instance in the domain, whether that instance is an example of the concept. For any instance which has not been presented by the trainer, the learner must therefore inductively infer whether the instance is an example of the concept.

II Problem and Related Work
The inductive inference process is driven by two kinds of information. The first kind of information is the classifications of training instances given by the trainer. The second kind of information, which we call bias, is broadly defined as anything which influences how the concept learner draws inductive inferences based on the observed training instances. Without such bias, the inductive inference process can perform no better at classifying unobserved instances than random guessing [5].

We are interested here in the issue of how an "appropriate" bias for inductive learning can be acquired automatically. Bias can be built into a learning system in many ways. For example, Mitchell [4] and Vere [10] encode bias via an incomplete concept description language in which only certain partitions of the domain of instances are expressible. Waterman [11] encodes bias both in his concept description language and in his learning algorithm. Michalski [2, 3] encodes bias, not by limiting his concept description language, but by having a human user state the bias as rules for preferring one concept description to another.

*This work was supported by National Science Foundation grant MCS80-08889.

III Approach
In this paper, we consider concept learning where the bias resides solely in the concept description language. To do this, we impose the constraint that a hypothesis can be described if and only if there exists a corresponding concept description in the learner's concept description language. Thus, we represent a particular bias by a particular set of describable concepts: the concept description language. This equivalence between the language and bias allows us to operate on bias by operating on the concept description language. Figure III-1 illustrates a framework for revising bias by revising the concept description language.

Figure III-1: Model of Inductive Concept Learner

A. Detecting Inappropriate Bias
Bias is appropriate to the extent that it causes the concept learner to draw correct inductive inferences. Bias is strong to the extent that the set of hypotheses to be considered by the concept learner is small. We are examining the process of revising bias by adding new concept descriptions to the learner's concept description language.
Thus the revision process can be viewed as weakening strong bias in an appropriate manner.

Our method of detecting inappropriate bias is to detect incorrect inductive inferences which could have been drawn as a result of the bias. Accordingly, to detect that no consistent description of the observed instances exists in the concept description language is to prove that the bias is inappropriate, assuming that the training instances have been correctly classified by the trainer.

B. Revising Inappropriate Bias
When the existing concept description language has been identified as providing inappropriate bias, the bias must be revised by adding a new concept description to the language. For the purpose of induction, and to avoid the need for future revisions of bias, it is desirable to formulate a revision which corrects the discovered error and correctly anticipates the classifications of the unobserved instances.

1. Knowledge Sources for Recommending Revisions
We define a knowledge source as a procedure which uses available information (e.g., training instances, the existing concept description language, the context of the concept learning problem) to recommend revisions to the concept description language which will render it more appropriate. Below, we consider two classes of knowledge sources.

One class of knowledge source for recommending language extensions is characterized as "syntactic", because the proposed recommendations are derived only by considering boolean combinations of currently describable concept descriptions. For example, it may be possible to specify a new concept description by creating a disjunction of concept descriptions (A ∨ B ∨ ...) which correctly classifies the training instances. Various researchers have discussed data-driven methods for describing such disjunctive concepts [1, 4, 2]. Similarly, it may be possible to specify a new concept description by creating a counterfactual, as per Vere [10], of concept descriptions (A ∧ ¬(B ∧ ¬...)) which correctly classifies the training instances.

Let us consider a hand-generated example of a concept learning problem from the task of learning problem-solving heuristics in the domain of symbolic integration [6]. The concept description language is specified by the formal grammar shown in Figure III-2. Every sentential form derivable via the grammar is a concept description in the concept description language and describes (i.e., matches) the set of terminal strings derivable from the sentential form. We refer to the grammar as the generalization hierarchy.

Figure III-2: A grammar for a concept description language for symbolic integration

    state 1: ∫cos^7(x)dx
       op 1: f^r(x) → f^(r-1)(x)·f(x)
    state 2: ∫cos^6(x)cos(x)dx
       op 2: f^r(x) → (f^2(x))^(r/2)
    state 3: ∫(cos^2(x))^3 cos(x)dx
       op 3: cos^2(x) → 1 - sin^2(x)
    state 4: ∫(1 - sin^2(x))^3 cos(x)dx
       op 4: ∫g(f(x))f'(x)dx → ∫g(u)du, u = f(x)
    state 5: ∫(1 - u^2)^3 du, u = sin(x)
       op 5: poly^k(x) → [poly(x)·...·poly(x)]
    state 6: ∫(1 - 3u^2 + 3u^4 - u^6)du, u = sin(x)

Figure III-3: Solution Path for ∫cos^7(x)dx
Suppose the concept to be learned is "situations for which the solution method shown in Figure III-3 should be applied to solve a symbolic integration problem". If ∫cos^5(x)dx and ∫cos^7(x)dx are positive instances of this concept, and ∫cos^6(x)dx is a negative instance, then there does not exist a concept description in the concept description language which is consistent with these observed instances. In this situation, a knowledge source KS1 which produces succinct disjunctions may recommend
When several recommendations are available from one or more knowledge sources, a merging process must compare and weigh the recommendations against each other. This merging process may be based on many factors, including 1) the probability that a recommendation from a given source is correct, 2) whether the set of recommendations from one knowledge source intersects the set of recommendations from another source, and 3) the strength of the justification provided by the knowledge source for the recommended change (e.g. whether any of the concepts recommended for inclusion in the concept description language has been proven to be a sufficient condition for (subset of) the concept being learned). For the example considered above, it is known that the analysis of the analytical knowledge source KS, computes a sufficient condition for concept membership. Therefore the merging routine can reject the recommendation from KS, in favor of the recommendation from KS, because the recommendation from KS, is a sufficient condition for concept membership and it is more general than that recommended by KS ,. 3. Assimilating a Proposed Change When assimilating recommended changes to the concept description language, revisions of varying strengths may be possible. For example, to add a concept description which describes Scos5(x)dx v Jcos7(x)dx, the disjunction itself could be added to the language as a permitted concept description. Alternatively, a term “k-new” could instead be added which represents the disjunction 5 v 7. This latter revision causes the disjunction 5 v 7 to be considered by the concept learner in contexts other than Scos’(x)dx, and thus provides a more sweeping change to the language. A major difficulty in assimilating new language terms lies in determining where a new term fits relative to existing terms in the generalization hierarchy. The ease with which a new term can be assimilated depends strongly on the form of its definition, It is relatively easy to assimilate a new term which is defined as a boolean combination of existing terms. For example, the new term “trig A -tan” could be assimilated by placing it into the generalization hierarchy below “trig”, and above all descendants of trig except “tan”. As long as the number of descendants of “trig” and “tan” is finite, such expressions can be easily assimilated. In contrast, terms such as “odd integer” present considerable difficulty. While it is clear from the form of the definition that this term is a specialization of the term “r” (real number), it is not readily apparent that it is a specialization of “k” (integer). In order to determine this fact, the assimilator will require some knowledge about the meanings of the terms in this domain, and about the plus 416 and times functions used to define the new term, so that it can determine that every instance of the new term must also be an instance of k. References Cl1 Iba, G. A., ’ ‘Learning disjunctive concepts from If it is not possible to determine the precise location where the new term belongs in the generalization hierarchy, then it may be possible, as a subproblem, to empirically learn the correct location of the new term in the generalization hierarchy. Surprisingly, certain kinds of errors In assimilating a new term into the generalization hierarchy do not prevent correct usage of the new term for inductive learning of subsequent concepts. 
For example, if “odd- integer” is added to the generalization hierarchy as a sibling of “integer”, rather than as a specialization of “integer”, each remains nevertheless a correctly defined term. Only their relationship to each other remains imprecisely defined. Thus, it is possible to use new terms in certain cases even before they have been correctly assimilated. examples,” Master’s thesis, M.I.T., 1979, also Al memo 548. c21 Michalski, R. S., ’ ‘Pattern recognition as rule-guided inductive inference,” / EEE Transactions on Pattern Analysis and Machine intelligence, Vol. PAMI-2, No. 4, 1980, pp. 349-361. r.31 Michalski, R. S. and Chilausky, R. L., “Learning by being told and learning from examples: an experimental comparison of the two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis,” Policy Analysis and Information Systems, Vol. 4. No. 2, June 1980, Special issue on knowledge acquisition and induction. IV Summary and Open Issues In the above discussion we have suggested a framework for extending the concept description language which drives inductive learning. While boolean combinations of existing terms sometimes lead to appropriate language revisions, they are not likely to produce radically new language terms such as “odd-integer” or “twice-integrable-function”. A knowledge source such as KS, based on knowledge of heuristic terms. search can, on the other hand, lead to such new The framework presented here for revising bias represents our initial approach to this problem. One major issue that we have not discussed is that the recommendations for language change produced by the knowledge sources will often be incorrect To keep the concept description language from becoming “cluttered”, a control strategy is needed for tentatively adding new terms to the language, and later backtracking on a revision if necessary. One L-41 Mitchell, T. M., Version Spaces: An approach to concept learning, Ph.D. dissertation, Stanford University, December 1978, also Stanford CS report STAN-CS-78-7 11, HPP-79-2. c51 Mitchell, T. M., “The need for biases in learning generalizations”, Technical Report CBM-TR- 117, Department of Computer Science, Rutgers University, May 1980. C61 Mitchell, T. M., Utgoff, P. E., Nudel, B. and Banerji, R., ’ ‘Learning problem-solving heuristics through practice,” Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, August 198 1, pp. 127- 134. c71 Mitchell, T. M., ’ ‘Toward Combining Empirical and Analytic Methods for Learning Heuristics,” Human and Artificial Intelligence, Elithorn, A. and Banerji, R. (Eds.), Erlbaum, 1982. interesting possibility 7s that control over the generation of training instances may provide a method for resolving ambiguities about appropriate changes to the language. Where an ambiguity exists, a training instance can be generated which,- when classified by the trainer [8] will resolve the ambiguity. C81 cg1 Sammut, C., Learning Concepts by Performing Experiments, Ph.D. dissertation, University of New South Wales, November 198 1. Utgoff, P. E., “Acquisition of Appropriate Bias for Inductive Concept Learning”, Thesis Proposal, Given the relative strength of the analytical knowledge source described here, one may wonder whether the entire learning process could be based on this knowledge source, removing the need for inductive inference based on syntactic comparisons of training instances. 
From our initial case studies, it is clear that while an analytical knowledge source, such as KS,, of the learner is an important component, the analysis is generally too difficult to be relied upon exclusively for inferring heuristics. For that reason, we are currently working on combining analytical and empirical methods [7] for concept learning. Thus, we believe that even when such knowledge sources are available to guide learning, empirical induction based upon bias will still .be a necessary element, and that the kind of revision of bias discussed here will be necessary for effective inductive inference. Acknowledgments Department of Computer University, May 1982. Science, Rutgers [ 1 O] Vere, S. A, \ ‘Multilevel counterfactuals for generalizations of relational concepts and productions,” Artificial Intelligence, Vol. 14, No. 2, September 1980, pp. 138- 164. Cl 11 Waterman, D. A., ’ ‘Generalization learning techniques for automating the learning of heuristics,” Artificial Intelligence, Vol. 1, No. 112, 1970, pp. 121-170. We thank N.S. Sridharan, Ranan Banerji, Saul Amarel, Robert L. Smith Jr., Rich Keller, and Pat Schooley for the many discussions which led to ideas presented here. We thank Alex Borgida, John Kastner, Van Kelly, and Peter Spool for helpful suggestions and reading of earlier drafts. 417
1982
99
199
A COMPARATIVE STUDY OF CONTROL STRATEGIES FOR EXPERT SYSTEMS: AGE IMPLEMENTATION OF THREE VARIATIONS OF PUFF
Nelleke Aiello
Heuristic Programming Project, Department of Computer Science, Stanford University, Stanford, California 94305

Abstract
This paper presents the results of comparing three control strategies for expert systems: event driven, expectation driven, and goal driven. Three different versions of a pulmonary function analysis system (PUFF) were implemented, each with a different control strategy. The systems are described and compared for efficiency and naturalness. The advantages and disadvantages of each strategy are discussed. The reasons why one approach, the expectation-driven strategy, is best suited for the PUFF application are summarized.

1. Introduction
Three versions of the PUFF pulmonary function analysis system are compared here, each using the same domain knowledge (represented as production rules) but different control strategies. The purpose of this research is to analyze these control strategies and the resulting programs and to examine what knowledge the knowledge engineer has, or needs, when choosing a control strategy for a new expert system.

The three control strategies are described briefly as follows. Event-driven (also called data-driven) control consists of input data or earlier events leading to the invocation of rule sets that generate further events. With a model-driven strategy, the system matches the current "state of the world" against disease models to generate expectations; further actions are taken based on expected symptoms. Backchaining is a goal-driven strategy in which a goal rule invokes all the rules whose conclusions are referenced by the conditions of that goal rule. These rules in turn invoke relevant rules in a chain, until rules that reference only the input data are reached.

The three PUFF systems were developed using the AGE system for building expert systems. A user of AGE can define production rules about a particular domain, set up a basic structure for a solution space, and then experiment with different control strategies to find one that best fits the problem. There are several dimensions along which the best fit can be determined, both subjective and objective. Subjective measurements include an indication of how natural the knowledge represented and the output of the program seem to the expert. Objective measurements are speed and accuracy, in terms of the total number of rules evaluated or comparisons with the physician's diagnosis.

1.1. PUFF Tasks
The basic task of PUFF is the interpretation of pulmonary function tests. These standard tests include measuring lung volume, that is, the volume of air and the rate at which it can be forcibly exhaled, and the capacity to inspire a large volume of air. These and other laboratory measurements, as well as data about the patient's history and referral diagnoses, are interpreted by production rules. The result is a set of interpretations and conclusions and a pulmonary function diagnosis, similar to those a doctor would produce given the same initial data.

The original PUFF system (Kunz et al., 1978) was a rule-based system designed with the EMYCIN system (van Melle, 1980) for building expert consultation systems. CENTAUR (Aikins, 1980) is yet another version of PUFF, using frames to represent prototypes of particular diseases and subtypes. The three versions of PUFF compared for this experiment use the same domain rules as the original PUFF system.
1.2. AGE
AGE (Nii et al., 1979), which stands for Attempt to GEneralize, is a collection of tools and partial frameworks for building expert systems. In the examples presented in this paper, we have exploited several AGE features; in particular, we were able to experiment with various designs and control strategies using the same domain knowledge. For the purposes of this experiment, we tried to use the same rules, data structures, and input data as far as possible for each system; only the control strategies were varied. Initially, we implemented PUFF as a blackboard model (Erman et al., 1975) with a very simple, event-driven control strategy. It was made into an expectation-driven system by adding a few rules for the initial diagnosis model and converting other rules to generate expectations for pulmonary diseases and their typical symptoms. Finally, we added a goal rule and one new attribute in the data structure to allow for backward chaining of the domain rules.

2. Description of the Programs
2.1. Event-driven
The event-driven version of PUFF, AGEPUFF, has a simple blackboard data structure to store the intermediate and final results. There are two levels, the PATIENT level and the DISEASE level, with three instance nodes on the DISEASE level: OAD (Obstructive Airway Disease), RLD (Restrictive Lung Disease), and NORMAL. An initial set of rules (termed a knowledge source in AGE) looks at all the data (the test measurements) and records what those measurements indicate on the blackboard. Each action of recording something on the blackboard is called an event. When all of the relevant rules in the initial knowledge source (KS) have been evaluated, a user-specified selection method chooses one of those events to use as a focus. This focused event is then matched against the preconditions of all the knowledge sources, and the one whose precondition matches the focused event is invoked next. As an example (see Fig. 2-1), events that indicate some conclusion about the degree of OAD trigger a KS (OAD-SUBTYPERULES) that tries to determine the OAD subtype (Emphysema, Asthma, or Bronchitis). In other words, the events drive the order in which KSs are evaluated. This process is repeated until some termination condition is met.
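To make the control cycle concrete, the following minimal Python sketch mimics the event loop just described. The blackboard layout follows AGEPUFF, but the rule contents, helper names, and LIFO event list are illustrative assumptions, not AGE's actual code.

    # A minimal event-driven (blackboard) control loop in the AGE style: rules
    # post events when they modify the blackboard; the focused event selects the
    # next knowledge source by matching its precondition.
    blackboard = {"PATIENT": {}, "OAD": {}, "RLD": {}, "NORMAL": {}}
    events = []   # treated last-in, first-out, as in AGEPUFF

    def propose(ev_type, node, attr, value):
        """The PROPOSE action: record a change and generate an event."""
        blackboard[node][attr] = value
        events.append(ev_type)

    def initial_ks(data):
        if 22 <= data["SLOPE:F50/FVC"] <= 32:
            propose("OAD-DEGREE", "OAD", "DEGREE", ("MODERATE", 0.5))

    def oad_subtype_ks(data):
        blackboard["OAD"]["SUBTYPE"] = "..."   # subtype rules would go here

    KNOWLEDGE_SOURCES = {          # precondition (event type) -> knowledge source
        "OAD-DEGREE": oad_subtype_ks,
    }

    def run(data):
        initial_ks(data)
        while events:                          # terminate when no events remain
            focus = events.pop()               # LIFO focus selection
            ks = KNOWLEDGE_SOURCES.get(focus)
            if ks:
                ks(data)
        return blackboard

    print(run({"SLOPE:F50/FVC": 25}))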
If the diagnosis is still credible, a set of further model-based expectations is generated for corroborating evidence. AGE compares the expectations with input data and partial hypotheses on the blackboard. Figure 2-2 shows the organization of the knowledge sources for the expectation- driven model. The models of OAD and RLD help to focus the interpretation of the input data. For example, if a patient has a total lung capacity greater than 120 percent of the normalized value for his size and age, it is very likely that he has OAD. So, instead of checking all the available test data, as in the event-driven example, AGEPUFF/MODEL looks first, and possibly only, at other indicators of OAD. If enough other indicators (test data, calculations, patient 3’LbCl #+a +c Expectation Process Expectation Process 1 Yis support OAD-support c"u OAD-subtype<-> & support Figure 2-2: Expectation driven organization of AGEPUFF/MODEL. history, and referral diagnosis) agree with the OAD diagnosis, then the actions that reinforce the initial conclusion are taken; which in turn generate events that trigger knowledge sources about OAD reversibility and OAD subtypes. The following is an example of a rule from the model-driven system. if (EQDEG* ($VALUE 'OAD DEGREE) MODERATE) then EXPECT expr (BETWEEN' ($DATA SLOPE:F50-F25/fVC) 22 32) actions (PROPOSE ev.type OAD-SUPPORT ch.type MODIFY hypo-element 'OAD attr-value ((DEGREE (MODERATE .5)) (FINDINGS '((F6025 *F5025*) 1.01 EXPECT expr (GREATERP" ($DATA FVC/FVC-PRED) 80) (BETWEEN* (DIFFERENCE ($DATA FEVl/FVC) ($DATA PRED:FEVl/FVC)) -25 -15) actions . . . EXPECT . . . When this rule is executed, several expectations are generated. The expectation expressions (expr) are evaluated in a last-in, first- out order and, if true, the actions are taken. These actions can generate events, just as in the event-driven system. As noted above, the expectation-driven approach is really a combination of an event and expectation strategy. No system can function on expectations alone. Events are used to generate information for the expectations and to act when an expectation is met. 2.3. Backchaining The backchained version of PUFF, called PUFFIBC, has or,ly one knowledge source. All the PUFF rules are combined into one large knowledge source. The rules are chained by looking at the conditions of the goal rule and then searching for other rules whose actions might provide values for the goal rule’s conditions. This imposes a more restrictive syntax on the conditions of rules than in the event-driven or expectation-driven approaches. The backward chaining process requires that the conditions be written as predicates, with explicit object and attrlbute names. (In the other two systems, the predicates can be arbitrary LISP expressions.) AGE automatically chains the rules by searching for rules whose actions reference the same object-attribute pairs as the conditions of the goal rule. This chaining process is repeated, with the rule whose actions referred to the desired object and attribute becoming the next subgoal rule, until conditions are rxeached that only reference input data. Figure 2-3: Organization of knowledge sources for Backchaining. An example of a rule from the PUFF/% system follows. if (UErWEEN* DATA SLOPE:F50-F25/FVC 22 32) then object OAD attribute-value pairs (DEGREE ('MODERATE .S))(FINDINGS ((F5025 *F5025* 1.01 in this case, the value of the attribute SLOPE:F50-F25/FVC of the node DATA will be passed on as an argument to the predicate BETWEEN*. 
3. Results of Comparing the Three Strategies
3.1. Speed
The three control strategies were objectively compared for efficiency by measuring the number of rules tested, the number of rules executed, the number of input data items referenced, and the number of references to the blackboard (or to the internal data representation, in the backchained strategy). Table 3-1 shows the average results for the three strategies over a small sample of actual cases, with diagnoses ranging from mild to severe cases of the disease categories OAD, RLD, and NORMAL, and various combinations thereof. There was very little deviation from the average for each measurement.

From this objective measurement it is clear that the model-driven strategy does less testing and refers to data less often than the other two strategies. The ability to focus initially on the most likely diagnoses eliminates the need to test rules for other diagnoses. A few more rules are executed in the model-driven and goal-driven strategies to set up the expectations or goals. The goal-driven strategy has the most references to internal data, because every predicate refers to an object-attribute pair, each rule may have several predicates, and all the rules are tested on each run.

3.2. Accuracy
Accuracy was measured as agreement with the doctor's conclusions, based on the statements PUFF is capable of producing. When the strategies are compared by the number of correct interpretations and diagnoses, the event-driven and goal-driven strategies are slightly more accurate than the model-driven strategy. Given odd or marginal data, the model-driven strategy may produce incomplete interpretations; it will not produce incorrect statements, however. The event-driven strategy is initially data driven and therefore responds to each data item, odd or normal. The goal-driven strategy evaluates all of its rules, independent of the particular input data. In other words, the event-driven and goal-driven strategies will produce all conclusions derivable for a given set of input data, but the model-driven strategy will produce only conclusions compatible with its initial diagnoses and results derivable from the models for those diseases.

3.3. Naturalness to the Expert
Accuracy can also be evaluated subjectively, in terms of the naturalness of the knowledge represented and of the output. A knowledge representation that seems natural to the expert facilitates knowledge acquisition from the expert and comprehension of the system by the expert. A natural output includes the order in which interpretations are generated: they should be produced in an order similar to that produced by the expert.

3.3.1. Output
The order of the output of interpretations in all three PUFF systems is determined by the order in which those interpretations are concluded. We compared these orders with the interpretations from the original physician's reports. For each case, the number of items out of order was calculated by counting the minimum number of moves required to reproduce the doctor's order. Output generated by the model-driven system always had the fewest findings out of order. The event-driven system had the next fewest out-of-order items, and the goal-driven system had the most. This result was not unexpected, since the model used by AGEPUFF/MODEL is based on the doctor's order of reasoning from pulmonary test measurements to pulmonary disease diagnoses.
3.3.2. Knowledge Representation

The domain knowledge used by each of the three control strategies is basically the same. All three use production rules to represent the domain knowledge; they differ in how those rules are organized.

Table 3-1: Measured Comparison of Three Control Strategies

    Strategy        rules tested   rules executed   input data   internal data
    event-driven        60.4            12.6            80.4          54.4
    model-driven        35.5            14.5            47            48.5
    goal-driven         68              14              76           128.8

In the event-driven approach, the rules are divided into sets called knowledge sources. A knowledge source is invoked when its precondition matches the event type of the focused event. The model-driven system also has sets of production rules, but some of those rules create expectations instead of generating events. The expectations (in this case, a set of ranges for data values that would support the focused initial diagnosis) are matched against the input data and the partial solution stored on the blackboard before further knowledge sources are invoked. If an expectation becomes true, it becomes the new focus. In the backward-chained system, the rules are not stored in any particular order or grouping; the order of evaluation and the focus are determined at run time by the backward chain from the goal rule.

3.4. Advantages and Disadvantages of Each Strategy for Implementing PUFF

The event-driven control strategy is robust; it considers all of the input data and tries to follow through on every event generated. Because of this thoroughness, it worked correctly even in the presence of unusual and incomplete data. However, the event-driven approach can be nonconvergent. Event-driven systems can produce only conclusions derivable directly or indirectly from the input data; they cannot focus on, or direct the search toward, a desired solution.

The expectation-driven model is usually more efficient than either of the other two examples. It can focus on the important factors and come immediately to the correct conclusions for most cases. However, odd or conflicting data that do not fit the model will cause problems. Thus, a disadvantage of the model-driven paradigm is that the correct solution depends heavily on the correct model and initial focus. If the system begins by focusing on an incorrect diagnosis, it will check only the data that are relevant to that wrong diagnosis; data indicative of another diagnosis would be ignored. The importance of the correct model was demonstrated during the implementation of AGEPUFF/MODEL. The first model implemented worked correctly for extreme indications of OAD and RLD but expected everything in between to be normal. The normal expectations were correctly unconfirmed, but the system could not go back and correctly diagnose mild or moderate OAD or RLD.

The backchain example has several advantages. First of all, it is simple for the user to understand, and it lends itself to easy explanation of its reasoning. At each step of the execution, the next step is predetermined: the rules will always be evaluated in the same order, regardless of the particular input data. However, this same property makes the backchaining approach inefficient. Many of the rules in the backward chain may be totally irrelevant for a particular set of input data values, especially normal, healthy values. The system cannot focus on particular data items in different sets of input data because it has no mechanism for deciding what is important and what is not; the goal of diagnosing pulmonary diseases is the same for every case, for every set of test data.
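The trade-off just described, strong focus bought at the price of dependence on the initial hypothesis, can be seen in a minimal sketch of the expectation-driven cycle. The disease models, attribute names, and threshold values below are hypothetical stand-ins, not the actual PUFF models; the sketch only illustrates why confirmed expectations build support while data outside the focused model are never read.

    # Hypothetical expectation models: attribute -> (low, high) range
    # whose satisfaction would support the focused diagnosis.
    MODELS = {
        "OAD": {"TLC%": (120, 250), "SLOPE:F50-F25/FVC": (0, 32)},
        "RLD": {"TLC%": (0, 80)},
    }

    def initial_focus(data):
        # Crude stand-in for the initial-hypothesis model, mirroring the
        # earlier example: total lung capacity above 120% suggests OAD.
        tlc = data.get("TLC%", 100)
        if tlc > 120:
            return "OAD"
        if tlc < 80:
            return "RLD"
        return "NORMAL"

    def interpret(data):
        focus = initial_focus(data)
        if focus not in MODELS:
            return focus, []
        support = []
        for attr, (lo, hi) in MODELS[focus].items():
            value = data.get(attr)  # only focus-relevant data are read
            if value is not None and lo <= value <= hi:
                support.append(attr)  # a confirmed expectation adds support
        # Unconfirmed expectations simply add nothing: the interpretation
        # may be incomplete, but no incorrect statement is produced, and
        # data bearing only on other diagnoses are never examined.
        return focus, support

    print(interpret({"TLC%": 130, "SLOPE:F50-F25/FVC": 27}))
    # prints: ('OAD', ['TLC%', 'SLOPE:F50-F25/FVC'])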
4. Conclusion

All three approaches use a divide-and-conquer approach to solving the PUFF problem, but they differ in how the domain knowledge is divided. In the event-driven system, knowledge is divided by its association with the input data. A particular value of an input data item can cause a rule to conclude the presence or severity of a disease; that disease then becomes the focus for the next set of rules (knowledge source) evaluated. Alternatively, several input data items can together cause a rule to conclude a new value for some item on the blackboard, which then becomes the new focus. The focus of the system stays with a particular input data item, and all conclusions based on it, until no more knowledge sources can draw further conclusions.

The expectation-driven system organizes its knowledge around its disease models, the expected symptoms associated with each disease. The focus of the system varies with the initially hypothesized diagnosis. This focus is much stronger than that of the event-driven system: in most cases, the system considers only one or two diseases, making all possible conclusions about one disease before considering the alternatives. In the event-driven approach, by contrast, each individual input data item can trigger a brief focus on a particular diagnosis, with support building with each additional focus on the same diagnosis.

The goal-driven strategy is focused by the backward chain connecting the production rules. It looks at data relevant to the goal rule's conditions and thus focuses on each disease, present or not.

In the event-driven and goal-driven approaches, all of the input data have the same importance. In the model-driven approach, some data items are more important than others: input data used by the model to generate initial diagnoses are always referenced, while data items bearing on diagnoses not in the initial hypothesis may be ignored.

The characteristics of the PUFF problem that make it a good application for an expectation-driven control strategy include:

1. A large amount of input data.
2. A small solution space (three possible diseases plus a small number of subtypes).
3. A simple model for initial hypotheses.

Applications with similar characteristics include signal processing and other diagnostic problems. Other systems written in AGE using the expectation-driven control strategy include AGE-VM, a ventilator management system (Fagan, 1980), and GEO, a geological data interpretation program. Each new application is likely to have slightly different characteristics from the last, so it is not reasonable to expect one control strategy to satisfy everyone's needs.

5. Acknowledgments

This research was supported by the Advanced Research Projects Agency under Contract No. MDA 903-77-C-0322 and by National Institutes of Health Grant No. RR-00758.
6. References

1. Aikins, J. S. Prototypes and Production Rules: A Knowledge Representation for Computer Consultations. Doctoral dissertation, Heuristic Programming Project, Dept. of Computer Science, Stanford University, 1980.

2. Erman, L. D., and Lesser, V. R. "A Multi-Level Organization for Problem Solving Using Many Diverse Cooperating Sources of Knowledge." In Proc. IJCAI-75, pp. 483-490.

3. Fagan, L. M. VM: Representing Time-Dependent Relations in a Medical Setting. Doctoral dissertation, Heuristic Programming Project, Dept. of Computer Science, Stanford University, 1980.

4. Kunz, J., Fallat, R., McClung, D., Osborn, J., Votteri, B., Nii, H. P., Aikins, J. S., Fagan, L., and Feigenbaum, E. "A Physiological Rule-Based System for Interpreting Pulmonary Function Test Results." HPP-78-19 (Working Paper), Heuristic Programming Project, Dept. of Computer Science, Stanford University, December 1978.

5. Nii, H. P., and Aiello, N. "AGE: A Knowledge-Based Program for Building Knowledge-Based Programs." In Proc. IJCAI-79, Tokyo, Japan, August 1979, pp. 645-655.

6. Van Melle, W. EMYCIN: A Domain-Independent Production-Rule System for Consultation Programs. Doctoral dissertation, Heuristic Programming Project, Dept. of Computer Science, Stanford University, 1980.