QUESTION ORDERING IN MIXED INITIATIVE PROGRAM SPECIFICATION DIALOGUE Louis Steinberg Department of Computer Science Rutgers University New Brunswick, N. J. 08903 ABSTRACT It would be nice if a computer system could accept a program specification in the form of a mixed initiative dialogue. One capability such a system must have is the ability to ask questions in a coherent order. We will see a number of reasons it is better if such a system produces all the questions it can and has a "dialogue moderator" choose which to ask next, than if the system asks the first question it thinks of. DM [9I, the dialogue moderator of PSI C51, chooses questions by searching a network model of a program, under control of a set of heuristic rules. This technique is simple and flexible. I. Introduction When you need a computer program, it is usually easier to tell a human being what the program should do than to specify the program directly to a computer system (eg a compiler). There are a number of reasons for this, including the knowledge and reasoning ability that a human has. We will concentrate here, however, on another advantage of communicating with humans, their ability to engage in a mixed initiative dialogue, and on one particular capability required for carrying on such a dialogue,*the ability to ask questions in a coherent order. A mixed initiative dialogue is one in which either party may take initiative. From the perspective of the work reported here, to "take initiative" in a dialogue is to alter the structure of the dialogue. This definition is essentially equivalent to that of Bobrow, et al [II, who define taking initiative as establishing or --m---w- ' This work was supported by the Defense Advanced Research Projects Agency at the Department of Defense under contract MDA 903-76-C-0206. The author was also partially supported by an IBM Graduate Fellowship. The work reported here would not have been possible without the environment provided by Cordell Green and the other members of the PSI project. I would also like to thank N. S. Sridharan for his comments on an earlier draft of this paper. violating expectations about what will come next, since it is precisely the structure of a dialogue which gives rise to such expectations. In particular, we will be concerned here with "topic structure", the order and relationships of the topics covered in the dialogue, and with "topic initiative", the ability to affect topic structure. The work described here 191 been done in the context of the PSI program synthesis system [5]. PSI acquires program specifications via mixed initiative, natural language dialogue. II. The General Scheme In order to ask questions, such a system must be able to do two things: it has to decide what aspects of the specification are as yet incomplete, and it has to decide which one of these aspects to ask about next. We will refer to the latter problem, deciding which question to ask next, as the task of "question ordering". A. Order from the Reasoning Process One common way to handle question ordering might be summarized as asking the first question the system thinks of. In this scheme, the system goes through its normal reasoning process, and at some point comes across a fact which it wants to know, but cannot deduce. Whenever this happens, the system stops and asks the user. (See, for example, [11 and 143). Note that the system stops whenever it finds any question to ask. Thus, the system asks each question as it comes up, and the order is determined by the reasoning process. 
If a system's reasoning process seems natural to the user, then this scheme produces a question order which seems natural, at least to a first approximation. However, there are some problems. The basic problem is that this scheme ties the topic structure of the dialogue to the reasoning procedures of the system. This makes topic structure harder to change, since any change in topic structure requires a change in the reasoning procedure. It can also make it hard to transfer the question ordering methods to another system that uses a different reasoning method. Finally, this method of question ordering assumes that there is a single, sequential reasoning process, and is 61 From: AAAI-80 Proceedings. Copyright © 1980, AAAI (www.aaai.org). All rights reserved. not possible in a system structure such as that of HEARSAY-II [71. B. Order from a Dialog Moderator A better scheme is to have the reasoning process produce as many questions as it can, and to use some other mechanism to select a single one of them to ask next. This scheme largely avoids the problems of the previous one. Its main drawback is that it requires a reasoning process which is able to produce more than one question at a time. An additional advantage of this scheme is that it allows us to implement question ordering in a separate module, with a clearly defined interface to the rest of the system. I have termed such a module a "dialogue moderatortt. Thus, the dialogue moderator is given a list of all the questions currently open, and must choose which one is to be asked next, so as to keep the dialogue well structured. Much recent research (eg [2], C61, [8]> has shown that structure of a dialogue is closely tied to the structure of goals and plans being pursued by the dialogue's participants. One might therefore imagine that the dialogue moderator needs a complete model of goals and plans, both those of the system and those of the user. However, in a program specification dialogue, the goals and plans of both participants are tied very closely to the structure of the program. As will be seen, it has been possible in PSI to use a simple model of the program structure instead of a complex model of goals and plans. (It might be argued that any system which handles natural language will eventually need the full model of goals and plans anyway, so using a simpler model here is no savings in the long run. It should be noted, however, that mixed initiative does not necessarily imply natural language. A useful system might be constructed which handles mixed initiative dialogue in some formal language.) III. J& Specific Method DM is the dialogue moderator of the PSI system. As noted above, DM maintains a simplified model of the program being specified. The program is viewed as a structured set of objects. Each object is either a piece of algorithm or a piece of data structure - the pieces of algorithm correspond roughly to the executable statements of a program, and the pieces of data structure correspond roughly to the variable declarations. A specific loop or a specific input operation might be algorithmic objects, while a set or a 5-tuple might be data structure objects. These objects are structured by two relationships: an object may be a subpart of another (eg an input operation might be a step of a loop, and thus one of its subparts), and an algorithm object may use a data structure object (eg an input operation "usestt the data structure it inputs). 
DM represents this structure in a standard network form; nodes represent the objects, and arcs represent the relations subpart/superpart and uses/used-by. Each node also has associated with it a list of questions about the object it represents. (A question asks about some attribute of some specific object. The objects, relations, and questions come from other modules of PSI.) In order to choose the next question to ask, DM searches the net, starting at the "present topic". The present topic is the object currently being discussed. Determining which object this is is a difficult and important problem in its own right, involving the syntax of the user's sentences as well as the status of the program specification, and has not been seriously dealt with in this work. Instead, some simple heuristics are used, the main one being to assume that most of the time the user will be talking about the object that the system just asked about. Once the present topic has been chosen, the search proceeds, under control of a set of rules. (The rules are listed in the appendix. See [9] for a discussion of the specific rules.) Each time the search reaches an object, a list of rules is chosen (depending on whether the object is a piece of algorithm or data structure) and these rules are applied in order. Some say to look for a specific kind of question about the current object. Others say to move along some particular kind of arc from the current object, and recursively apply the rules on the object we reach. If no question is found by this recursive application, we come back and continue applying the rules here. If at any point a rule that looks for questions finds one, that question is the one to ask, and the search stops. This scheme of moving through the net and looking for questions, under control of a set of rules, has proven to be simple and flexible. A related technique was used in SCHOLAR [3]. SCHOLAR is a CA1 system which teaches geography by engaging in a mixed initiative dialogue with the student. Both participants may ask and answer questions. SCHOLAR chooses which question to ask by a random (rather than rule directed) walk on a net which encodes its knowledge about geography. As ultimately envisioned, SCHOLAR would teach in a Socratic manner, that is, by asking a carefully designed sequence of questions. However, the structure of goals and plans in such a dialogue is probably very different from the structure of the net as discussed in [31. Because of this, a scheme of moving through this net is unlikely to be useful for producing such a sequence of questions. DM's question ordering behavior has been tested in two ways. First, a log of runs of PSI was surveyed. This log included 42 dialogues which were essentially complete. Each dialogue was checked, both to see if the user complained about the question ordering (there is a comment feature that can be used for such complaints), and also to see if the question order was subjectively acceptable. Except for one instance, later traced to a program bug, DM's behavior was correct. This test was too subjective, however, so a simulated dialogue was recorded, with myself playing the role of PSI and a programmer from outside the PSI group as the user. The inputs DM would have gotten during this dialogue were hand coded and given to DM, and the questions DM chose were compared with those I had chosen. DM had to choose a question at sixteen points, with two to seven questions to choose from. The correct question was chosen at thirteen of these points. 
An analysis of the errors indicates that they could be removed by some straightforward extensions of the current methodology, particularly by maintaining more history of how the dialogue got to the present topic.

IV. Conclusions

Thus we see that it is advantageous for a system which engages in mixed initiative dialogue to have the reasoning modules produce all the questions they can at each point in the dialogue, and to have a separate dialogue moderator choose which one to ask next. In such a system, the question ordering mechanism is decoupled from the reasoning process, so that either can be modified without changing the other. A given mechanism for selecting one of the proposed questions can be more easily transferred to a system with a very different reasoning mechanism. Also, multiple parallel reasoning processes can be used with this scheme.

DM, the dialogue moderator of PSI, represents the program as a simple net of objects and relations. It chooses a question by starting at the node representing the present topic of the dialogue, and searching the net, under control of a set of rules. It is possible to use a simple model of the program, rather than a complex model of goals and plans, because in the program specification task, the participants' goals and plans are so closely tied to the program structure. This general scheme of rule based search is advantageous because it is simple and flexible. These techniques are probably applicable to other settings where the structure of goals and plans can be tied to some simple task related structure.

APPENDIX: Question Choice Rules

(These are slightly simplified versions of the content of the rules. The actual rules consist of LISP code.)

Rules for Algorithms
A1) Are there questions about the NAME of this object?
A2) Look at all objects that are USED-BY this object.
A3) Are there questions about this object other than EXIT-TEST, PROMPT, or FORMAT?
A4) Are there questions about the PROMPT or FORMAT for this object?
A5) Look at all objects that are SUB-PARTS of this object.
A6) Are there questions about the EXIT-TEST of this object?
A7) Look at all objects that are SUPER-PARTS of this object.

Rules for Data Structures
D1) Look at all objects that are SUB-PARTS of this object.
D2) Are there questions about the STRUCTURE of this object?
D3) Are there OTHER questions about this object?
D4) Look at all objects that are SUPER-PARTS of this object.
D5) Look at all objects that USE this object.

REFERENCES

[1] Bobrow, D., Kaplan, R., Kay, M., Norman, D., Thompson, H., Winograd, T., "GUS, A Frame-Driven Dialogue System." Artificial Intelligence 8 (1977) 155-173.
[2] Brown, G., "A Framework for Processing Dialogue", Technical Report 182, MIT Laboratory for Computer Science, June 1977.
[3] Carbonell, J. R., "AI in CAI: An Artificial Intelligence Approach to Computer-Aided Instruction." IEEE Trans. Man-Machine Systems 11 (1970) 190-202.
[4] Davis, R., Buchanan, B., Shortliffe, E., Production Rules as a Representation for a Knowledge-Based Consultation Program, Memo AIM-266, Stanford Artificial Intelligence Laboratory, October 1975.
[5] Green, C., A Summary of the PSI Program Synthesis System, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, August 1977, 380-381.
[6] Grosz, B., The Representation and Use of Focus in a System for Understanding Dialogues, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, August 1977, 67-76.
[7] Lesser, V., Erman, L., A Retrospective View of the HEARSAY-II Architecture, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, August 1977, 380-381.
[8] Mann, W., Man-Machine Communication Research: Final Report, ISI/RR-77-57, USC Information Sciences Institute, February 1977.
[9] Steinberg, L., A Dialogue Moderator for Program Specification Dialogues in the PSI System, PhD dissertation, Stanford University, in progress.
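To make the rule-directed search of Section III concrete, here is a minimal sketch of the question-choice loop. It is an illustration, not DM's actual code: the object representation, the rule encoding, and names such as CHOOSE-QUESTION are invented for the example, and the Appendix rules are reduced to two primitive actions, "ask about an attribute of this object" and "move along an arc and recurse". The first question any rule finds is returned, which matches the paper's stopping criterion.

(defstruct obj
  kind              ; :algorithm or :data
  questions         ; alist of (attribute . question)
  arcs)             ; alist of (arc-label . list of neighbouring objects)

(defparameter *algorithm-rules*
  ;; simplified stand-ins for rules A1-A7
  '((:ask :name) (:move :used-by) (:ask :other)
    (:ask :prompt) (:move :sub-parts) (:ask :exit-test) (:move :super-parts)))

(defparameter *data-rules*
  ;; simplified stand-ins for rules D1-D5
  '((:move :sub-parts) (:ask :structure) (:ask :other)
    (:move :super-parts) (:move :used-by)))

(defun find-question (attribute object)
  "Return a pending question about ATTRIBUTE at OBJECT, if any."
  (cdr (assoc attribute (obj-questions object))))

(defun choose-question (object &optional (visited '()))
  "Search the program net from OBJECT, applying the rules in order;
the first question found is the one to ask."
  (unless (member object visited)
    (push object visited)
    (dolist (rule (if (eq (obj-kind object) :algorithm)
                      *algorithm-rules*
                      *data-rules*))
      (destructuring-bind (action arg) rule
        (case action
          (:ask  (let ((q (find-question arg object)))
                   (when q (return-from choose-question q))))
          (:move (dolist (next (cdr (assoc arg (obj-arcs object))))
                   (let ((q (choose-question next visited)))
                     (when q (return-from choose-question q))))))))))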
AUTOMATIC GENERATION OF SEMANTIC ATTACHMENTS IN FOL

Luigia Aiello
Computer Science Department
Stanford University
Stanford, California 94305

ABSTRACT

Semantic attachment is provided by FOL as a means for associating model values (i.e. LISP code) to symbols of a first order language. This paper presents an algorithm that automatically generates semantic attachments in FOL and discusses the advantages deriving from its use.

I INTRODUCTION

In FOL (the mechanized reasoning system developed by R. Weyhrauch at the Stanford A.I. Laboratory [4,5,6]), the knowledge about a given domain of discourse is represented in the form of an L/S structure. An L/S structure is the FOL counterpart of the logician's notion of a theory/model pair. It is a triple <L,S,F> where L is a sorted first order language with equality, S is a simulation structure (i.e. a computable part of a model for a first order theory), and F is a finite set of facts (i.e. axioms and theorems).

Semantic attachment is one of the characterizing features of FOL. It allows for the construction of a simulation structure S by attaching a "model value" (i.e. a LISP data structure) to (some of) the constant, function and predicate symbols of a first order language. Note that the intended semantics of a given theory can be specified only partially, i.e. not necessarily all the symbols of the language need to be given an attachment.

The FOL evaluator, when evaluating a term (or wff), uses both the semantic and the syntactic information provided within an L/S structure. It uses the semantic attachments by directly invoking the LISP evaluator for computing the value of ground sub-terms of the term (wff). It uses a simplification set, i.e. a user-defined set of rewrite rules, to do symbolic evaluations on the term (wff). Semantic information and syntactic information are repeatedly used - in this order - until no further simplification is possible.

The research reported here has been carried out while the author was visiting with the Computer Science Department of Stanford University on leave from IEI of CNR, Pisa, Italy. Author's permanent address: IEI-CNR, via S. Maria 46, I-56100 Pisa, Italy.

Semantic attachment has been vital in the generation of many FOL proofs, by significantly increasing the efficiency of evaluations. The idea of speeding up a theorem prover by directly invoking the evaluator of the underlying system to compute some functions (predicates) has been used in other proof generating systems. FOL is different from other systems in that it provides the user with the capability of explicitly telling FOL which semantic information he wants to state and use about a given theory. This approach has many advantages, mostly epistemological, that are too long to be discussed here.

II AUTOMATIC GENERATION OF SEMANTIC ATTACHMENTS

It is common experience among FOL users that they tend to build L/S structures providing much more syntactic information (by specifying axioms and deriving theorems) than semantic information (by attaching LISP code to symbols). In recent applications of FOL, L/S structures are big, and (since the information is essentially syntactic) the dimension of the simplification sets is rather large. The unpleasant consequence is that the evaluations tend to be very slow, if feasible at all.
This has prompted us to devise and implement an extension of the FOL system, namely, a compiling algorithm from FOL into LISP, which allows for a direct evaluation in LISP of functions and predicates defined in First Order Logic. The compilation of systems of function (predicate) definitions from FOL into LISP allows FOL to transform syntactic information into semantic information. In other words, the compiling algorithm allows FOL to automatically build parts of a model for a theory, starting from a syntactic description.

Semantic attachment has often been criticised as error prone. In fact, the possibility of directly attaching LISP code to symbols of the language allows the FOL user to set up the semantic part of an L/S structure in a language different from that of first order logic. This forbids him to use FOL itself to check the relative consistency of the syntactic and semantic parts of an L/S structure.

The automatic transformation of FOL axioms (or, in general, facts) into semantic attachments, besides the above mentioned advantage of substantially increasing the efficiency of the evaluator, has the advantage of guaranteeing the consistency between the syntactic and semantic specifications of an FOL domain of discourse, or at least of keeping to a minimum the user's freedom of introducing non-detectable inconsistencies.

The semantic attachment for a function (predicate) symbol can be automatically generated through a compilation if such a symbol appears in the syntactic part of an L/S structure as a definiendum in a system of (possibly mutually recursive) definitions of the following form:

    ∀x1 ... xr . fi(x1,...,xr) = τi(c̄i, f̄i, P̄i, x1,...,xr)
    ∀y1 ... ys . Pj(y1,...,ys) ≡ ξj(c̄j, f̄j, P̄j, y1,...,ys)

Here the f's are function symbols and the P's are predicate symbols. The τ's are terms in the c̄'s, f̄'s, P̄'s and x's; the ξ's are wffs in the c̄'s, f̄'s, P̄'s and y's. By c̄ we denote a tuple of constant symbols. By f̄ (resp. P̄) we denote a tuple of function (resp. predicate) symbols. f̄ (resp. P̄) may contain some of the f's (resp. P's), but it is not necessarily limited to them, i.e. other function and predicate symbols besides the definienda can appear in each definiens.

The compilation algorithm, when provided with a system of definitions, first performs a well-formedness check, then a compilability check. The well-formedness check tests whether or not all the facts to be compiled are definitions, i.e. if they have one of the two following forms (note that here we use the word "definition" in a broader sense than logicians do):

    ∀x1 ... xr . fi(x1,...,xr) = ...
    ∀y1 ... ys . Pj(y1,...,ys) ≡ ...

The compilability check consists in verifying that a) each definition is a closed wff, i.e. no free variable occurs in it; b) all the individual constants and the function (predicate) symbols appearing in the definiens either are one of the definienda or are attached to a model value (the first case allows for recursion or mutual recursion); c) the definiens can contain logical constants, conditionals and logical connectives but no quantifiers.

When the FOL evaluator invokes the LISP evaluator, it expects a model value to be returned; it does not know how to handle errors occurring at the LISP level. This, for various reasons too long to be reported here, justifies the three restrictions above.
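To make the syntactic-to-semantic translation concrete, here is a minimal sketch of compiling one such definition into LISP. It is not FOL's compiler: the s-expression format for definitions, the attachment table, and names such as COMPILE-DEFINITION are assumptions made for this example, and only definitions that already pass the checks above (closed, quantifier-free bodies whose symbols are either definienda or attached) are handled.

;;; Hypothetical sketch: compile a quantifier-free FOL definition into a defun.
;;; A definition is represented as (forall (vars...) (= (f x ...) body)).

(defparameter *attachments*
  ;; symbol of the language -> Lisp function implementing its model value
  '((P . c-p) (+ . plus)))

(defun target-name (symbol definienda)
  "Lisp name for SYMBOL: its compiled name if it is being defined,
its attachment otherwise."
  (cond ((member symbol definienda)
         (intern (format nil "C-~A" symbol)))
        ((cdr (assoc symbol *attachments*)))
        (t (error "~A has no attachment and is not a definiendum" symbol))))

(defun compile-body (term definienda)
  "Translate a term or quantifier-free wff into Lisp code."
  (cond ((atom term) term)                      ; variables (and constants, in this sketch)
        ((eq (car term) 'if)
         `(if ,@(mapcar (lambda (x) (compile-body x definienda)) (cdr term))))
        (t (cons (target-name (car term) definienda)
                 (mapcar (lambda (x) (compile-body x definienda)) (cdr term))))))

(defun compile-definition (def definienda)
  (destructuring-bind (forall vars (eq-or-iff head body)) def
    (declare (ignore forall vars eq-or-iff))
    `(defun ,(target-name (car head) definienda) ,(cdr head)
       ,(compile-body body definienda))))

;; Compiling the example discussed in the next section:
;; (compile-definition '(forall (y x) (= (f x y) (if (P x) (g x x) (f y x)))) '(f g))
;; => (DEFUN C-F (X Y) (IF (C-P X) (C-G X X) (C-F Y X)))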
Actually, the second and the third restrictions can be weakened with an appropriate extension of the FOL evaluator and of the compiler (respectively) to cope with the new situation. More details are presented in [1].

To present a simple example of compilation, consider the following facts:

    ∀y x. f(x,y) = if P(x) then g(x,x) else f(y,x)
    ∀y x. g(x,y) = x + y

If we tell FOL to compile them in an L/S structure where a semantic attachment exists both for the symbol P and for the symbol + (let them be two LISP functions named C-P and PLUS, respectively), it produces the following LISP code:

    (DE C-f (x y) (COND ((C-P x) (C-g x x)) (T (C-f y x))))
    (DE C-g (x y) (PLUS x y))

and attaches it to the function symbols f and g, respectively.

III SOUNDNESS OF THE COMPILATION

The compiling algorithm is pretty straightforward, hence its correctness should not constitute a problem. Conversely, a legitimate question is the following: Is the compilation process sound? In other words: Who guarantees that running the FOL evaluator syntactically on a system of definitions gives the same result as running the LISP evaluator on their (compiled) semantic attachments?

The answer is that the two evaluations are weakly equivalent, i.e. if both terminate, they produce the same result. This is because the FOL evaluator uses a leftmost outermost strategy of function invocation (which corresponds to call-by-name) while the mechanism used by the LISP evaluator is call-by-value. Hence, compiling a function can introduce some nonterminating computations that would not happen if the same function were evaluated symbolically.

This, however, does not constitute a serious problem and it will be overcome in the next version of FOL. In fact, it will be implemented in a purely applicative, call-by-need dialect of LISP (note that call-by-need is strongly equivalent to call-by-name in purely applicative languages).

IV CONCLUSION

FOL is an experimental system and, as is often the case with such systems, it evolves through the experience of its designer and users. Particular attention is paid to extending FOL only with new features that either improve its proving power or allow for a more natural interaction between the user and the system (or both) in a uniform way. The addition of the compiling algorithm sketched in the previous sections is in this spirit.

This extension of FOL has been very useful in recent applications (see, for instance, [2]). Experience has shown that the largest part of the syntactic information in an L/S structure can be compiled. This suggests a further improvement to be made on FOL evaluations. The use of the compiling algorithm leads to L/S structures where (almost) all the function (predicate) symbols of the language have an attachment. Hence, the strategy of the FOL evaluator of using semantic information first (which was the most reasonable one when semantic attachments were very few and symbolic evaluations could be rather long) is in our opinion no longer the best one. In fact, sometimes, properties of functions (stated as axioms or theorems in the syntactic part of the L/S structure) can be used to avoid long computations before invoking the LISP evaluator to compute that function.

Finally, a comment on related work. Recently (and independently), Boyer and Moore have added to their theorem prover the possibility of introducing meta-functions, proving them correct and using them to enhance the proving power of their system [3].
This is very much in the spirit of the use of META in FOL and of the compiling algorithm described here.

ACKNOWLEDGMENTS

The members of the Formal Reasoning Group of the Stanford A.I. Lab are acknowledged for useful discussions. Richard Weyhrauch deserves special thanks for interesting and stimulating conversations about FOL. The financial support of both the Italian National Research Council and ARPA (through Grant No. MDA903-80-C-0102) is acknowledged.

REFERENCES

[1] Aiello, L., "Evaluating Functions Defined in First Order Logic." Proc. of the Logic Programming Workshop, Debrecen, Hungary, 1980.
[2] Aiello, L., and Weyhrauch, R. W., "Using Meta-theoretic Reasoning to do Algebra." Proc. of the 5th Automated Deduction Conf., Les Arcs, France, 1980.
[3] Boyer, R. S., and Moore, J. S., "Metafunctions: Proving them correct and using them efficiently as new proof procedures." C.S. Lab, SRI International, Menlo Park, California, 1979.
[4] Weyhrauch, R. W., "FOL: A Proof Checker for First-order Logic." Stanford A.I. Lab, Memo AIM-235.1, 1977.
[5] Weyhrauch, R. W., "The Uses of Logic in Artificial Intelligence." Lecture Notes of the Summer School on the Foundations of Artificial Intelligence and Computer Science (FAICS '78), Pisa, Italy, 1978.
[6] Weyhrauch, R. W., "Prolegomena to a Mechanized Theory of Formal Reasoning." Stanford A.I. Lab, Memo AIM-315, 1979; Artificial Intelligence Journal, to appear, 1980.
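The weak-equivalence caveat of Section III can be illustrated with a small, hypothetical set of definitions (not drawn from the paper): under a leftmost-outermost (call-by-name) symbolic evaluation the diverging argument is never needed, but the compiled call-by-value code evaluates it first and diverges.

;;; Hypothetical illustration of weak equivalence.
;;; FOL-style definitions:   forall x y. k(x,y) = x
;;;                          forall x.   loop(x) = loop(x)
;;;                          forall x.   h(x)    = k(0, loop(x))
;;; A symbolic, call-by-name evaluation of h(1) rewrites it to 0 and stops.
;;; The compiled, call-by-value versions behave differently:

(defun c-k (x y) (declare (ignore y)) x)
(defun c-loop (x) (c-loop x))          ; diverges
(defun c-h (x) (c-k 0 (c-loop x)))     ; evaluates (c-loop x) first, so it diverges

;; (c-h 1)  ; would not return, although h(1) = 0 holds in the theory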
HCPRVR: AN INTERPRETER FOR LOGIC PROGRAMS

Daniel Chester
Department of Computer Sciences
University of Texas at Austin

ABSTRACT

An overview of a logic program interpreter written in Lisp is presented. The interpreter is a Horn clause-based theorem prover augmented by Lisp functions attached to some predicate names. Its application to natural language processing is discussed. The theory of operation is explained, including the high level organization of the PROVE function and an efficient version of unification. The paper concludes with comments on the overall efficiency of the interpreter.

I INTRODUCTION

HCPRVR, a Horn Clause theorem PRoVeR, is a Lisp program that interprets a simple logical formalism as a programming language. It has been used for over a year now at the University of Texas at Austin to write natural language processing systems. Like Kowalski [1], we find that programming in logic is an efficient way to write programs that are easy to comprehend. Although we now have an interpreter/compiler for the logic programming language Prolog [2], we continue to use HCPRVR because it allows us to remain in a Lisp environment where there is greater flexibility and a more familiar notation.

This paper outlines how HCPRVR works to provide logic programming in a Lisp environment. The syntax of logic programs is given, followed by a description of how such programs are invoked. Then attachment of Lisp functions to predicates is explained. Our approach to processing natural language in logic programs is outlined briefly. The operation of HCPRVR is presented by giving details of the PROVE and MATCH functions. The paper closes with some remarks on efficiency.

* This work was supported by NSF Grant MCS 74-2491-8.

II LOGIC PROGRAM SYNTAX

A logic program is an ordered list of axioms. An axiom is either an atomic formula, which can be referred to as a fact, or an expression of the form

    ( <conclusion> < <premiss1> ... <premissN> )

where both the conclusion and the premisses are atomic formulas. The symbol "<" is intended to be a left-pointing arrow. An atomic formula is an arbitrary Lisp expression beginning with a Lisp atom. That atom is referred to as a relation or predicate name. Some of the other atoms in the expression may be designated as variables by a flag on their property lists.

III CALLING LOGIC PROGRAMS

There are two ways to call a logic program in HCPRVR. One way is to apply the EXPR function TRY to an atomic formula. The other way is to apply the FEXPR function ? to a list of one or more atomic formulas, i.e., by evaluating an expression of the form

    ( ? <formula1> ... <formulaN> )

In either case the PROVE function is called to try to find values for the variables in the formulas that make them into theorems implied by the axioms. If it finds a set of values, it displays the formulas to the interactive user and asks him whether another set of values should be sought. When told not to seek further, it terminates after assigning the formulas, with the variables replaced by their values, to the Lisp atom VAL.

IV PREDICATE NAMES AS FUNCTIONS

Occasionally it is useful to let a predicate name be a Lisp function that gets called instead of letting HCPRVR prove the formula in the usual way. The predicate name NEQ*, for example, tests its two arguments for inequality by means of a Lisp function because it would be impractical to have axioms of the form (NEQ* X Y) for every pair of constants X and Y such that X does not equal Y.
Predicate names that are also functions are FEXPRs and expect that their arguments have been expanded into lists in which all bound variables have been replaced by their values. These predicate names must be marked as functions by having the Lisp property FN set to T, e.g., executing (PUT '<predicate name> 'FN T), so that HCPRVR will interpret them as functions.

By letting syntactic categories be predicates with three arguments, we can make axioms that pull phrases off of a list of words until we get a sentence that consumes the whole list. In addition, arbitrary tests can be performed on the phrase representations to check whether they can be semantically combined. Usually the phrase representation in the conclusion part of an axiom tells how the component representations are combined, while the premisses tell how the phrase should be factored into the component phrases, what their representations should be, and what restrictions they have. Thus, the axiom

    ((S X (U ACTOR V . W) Z) <
      (NP X V Y)
      (VP Y (U . W) Z)
      (NUMBER V N1)
      (NUMBER U N2)
      (EQ N1 N2))

says that an initial segment of word list X is a sentence if first there is a noun phrase ending where word list Y begins, followed by a verb phrase ending where word list Z begins, and both phrases agree in number (singular or plural). Furthermore, the noun phrase representation V is made the actor of the verb U in the verb phrase, and the rest of the verb phrase representation, W, is carried along in the representation for the sentence. After suitable axioms have been stored, the sentence THE CAT IS ON THE MAT can be parsed by typing

    (? (S (THE CAT IS ON THE MAT) X NIL))

The result of this computation is the theorem

    (S (THE CAT IS ON THE MAT)
       (IS ACTOR (CAT DET THE) LOC (ON LOC (MAT DET THE)))
       NIL)

VI THEORY OF OPERATION

A. General Organization

HCPRVR works essentially by the problem reduction principle. Each atomic formula can be thought of as a problem. Those that appear as facts in the list of axioms represent problems that have been solved, while those that appear as conclusions can be reduced to the list of problems represented by the premisses. Starting from the formula to be proved, HCPRVR reduces each problem to lists of subproblems and then reduces each of the subproblems in turn until they have all been reduced to previously solved problems, the "facts" on the axiom list. The key functions in HCPRVR that do all this are PROVE and MATCH.

B. The PROVE Function

PROVE is the function that controls the problem reduction process. It has one argument, a stack of subproblem structures. Each subproblem structure has the following format:

    ( <list of subproblems> . <binding list> )

where the list of subproblems is a sublist of the premisses in some axiom and the CAR of the binding list is a list of variables occurring in the subproblems, paired with their assigned values. When PROVE is initially called by TRY, it begins with the stack

    ( ( ( <formula> ) NIL ) )

The algorithm of PROVE works in depth-first fashion, solving subproblems in the same left-to-right order as they occur in the axioms and applying the axioms as problem reduction rules in the same order as they are listed. PROVE begins by examining the first subproblem structure on its stack.
If the list of subproblems in that structure is empty, PROVE either returns the binding list, if there are no other structures on the stack, i.e., if the original problem has been solved, or removes the first structure from the stack and examines the stack again. If the list of subproblems of the first subproblem structure is not empty, PROVE examines the first subproblem on the list. If the predicate name in it is a function, the function is applied to the arguments. If the function returns NIL, PROVE fails; otherwise the subproblem is removed from the list and PROVE begins all over again with the modified structure.

When the predicate name of the first subproblem in the list in the first subproblem structure is not a function, PROVE gets all the axioms that are stored under that predicate name and assigns them to the local variable Y. At this point PROVE goes into a loop in which it tries to apply each axiom in turn until one is found that leads to a solution to the original problem. It does this by calling the MATCH function to compare the conclusion of an axiom with the first subproblem. If the match fails, it tries the next axiom. If the match succeeds, the first subproblem is removed from the first subproblem structure, then a new subproblem structure is put on the stack in front of that structure. This new subproblem structure consists of the list of premisses from the axiom and the binding list that was created at the time MATCH was called. Then PROVE calls itself with this newly formed stack. If this call returns a binding list, it is returned as the value of PROVE. If the call returns NIL, everything is restored to what it was before the axiom was applied and PROVE tries to apply the next axiom.

The way that PROVE applies an axiom might be better understood by considering the following illustration. Suppose that the stack looks like this:

    ( ( (C1 C2) . <blist> ) ... )

The first subproblem in the first subproblem structure is C1. Let the axiom to be applied be

    (C < P1 P2 P3)

PROVE applies it by creating a new binding list blist', initially empty, and then matching C with C1 with the call (MATCH C <blist'> C1 <blist>). If this call is successful, the following stack is formed:

    ( ( (P1 P2 P3) . <blist'> ) ( (C2) . <blist> ) ... )

Thus problem C1 has been reduced to problems P1, P2 and P3 as modified by the binding list blist'. PROVE now applies PROVE to this stack in the hope that all the subproblems in it can be solved. In the event that the axiom to be applied is (C <), that is, the axiom is just a fact, the new stack that is formed is

    ( ( () . <blist'> ) ( (C2) . <blist> ) ... )

When PROVE is called with this stack, it removes the first subproblem structure and begins working on problem C2.

C. The MATCH Function

The MATCH function is a version of the unification algorithm that has been modified so that renaming of variables and substitutions of variable values back into formulas are avoided. The key idea is that the identity of a variable is determined by both the variable name and the binding list on which its value will be stored. The value of a variable is also a pair: the term that will replace the variable and the binding list associated with the term. The binding list associated with the term is used to find the values of variables occurring in the term when needed.
Notice that variables do not have to be renamed because MATCH is always called (initially) with two distinct binding lists, giving distinct identities to the variables in the two expressions to be matched, even if the same variable name occurs in both of them. MATCH assigns a value to a variable by CONSing it to the CAR of the variable's binding list using the RPLACA function; it also puts that binding list on the list bound to the Lisp variable SAVE in PROVE. This is done so that the effects of MATCH can be undone when PROVE backtracks to recover from a failed application of an axiom.

VII EFFICIENCY

HCPRVR is surprisingly efficient for its simplicity. The compiled code fits in 2000 octal words of binary programming space and runs as fast as the Prolog interpreter. Although the speed can be further improved by more sophisticated programming, we have not done so because it is adequate for our present needs. A version of HCPRVR has been written in C; it occupies 4K words on a PDP-11/60 and appears to run about half as fast as the compiled Lisp version does on a DEC KI10.

The most important kind of efficiency we have noticed, however, is program development efficiency, the ease with which logic programs can be written and debugged. We have found it easier to write natural language processing systems in logic than in any other formalism we have tried. Grammar rules can be easily written as axioms, with an unrestricted mixture of syntactic and non-syntactic computations. Furthermore, the same grammar rules can be used for parsing or generation of sentences with no change in the algorithm that applies them. Other forms of natural language processing are similarly easy to program in logic, including schema instantiation, question-answering and text summary. We have found HCPRVR very useful for gaining experience in writing logic programs.

REFERENCES

[1] Kowalski, R. A. "Algorithm = logic + control." CACM 22, 7, July 1979, 424-436.
[2] Warren, D. H., L. M. Pereira, and F. Pereira. "PROLOG - the language and its implementation compared with Lisp." Proc. Symp. on AI and Programming Languages, SIGPLAN 12(8)/SIGART 64, August 1977, 109-115.
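The binding-list idea behind MATCH can be sketched as follows. This is a simplification rather than HCPRVR's code: variables are marked by a leading ? instead of a property-list flag, bindings are kept applicatively in an association list instead of being RPLACA'd in place, and names such as UNIFY are invented. What it preserves is the key point that a variable's identity is the pair (name, binding environment), so the same name occurring in an axiom and in a goal never clashes and no renaming is needed.

;;; Sketch of structure-shared matching with separate binding environments.
(defun var-p (x)
  (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

(defun lookup (var env bindings)
  "Value of VAR in ENV, as a (term . env) pair, or NIL if unbound."
  (cdr (assoc (cons var env) bindings :test #'equal)))

(defun unify (x xenv y yenv bindings)
  "Match term X (in environment XENV) against Y (in YENV).
Return (values new-bindings t) on success, (values nil nil) on failure."
  (cond ((var-p x)
         (let ((val (lookup x xenv bindings)))
           (if val
               (unify (car val) (cdr val) y yenv bindings)
               (values (acons (cons x xenv) (cons y yenv) bindings) t))))
        ((var-p y) (unify y yenv x xenv bindings))
        ((and (atom x) (atom y))
         (if (eql x y) (values bindings t) (values nil nil)))
        ((or (atom x) (atom y)) (values nil nil))
        (t (multiple-value-bind (b ok)
               (unify (car x) xenv (car y) yenv bindings)
             (if ok
                 (unify (cdr x) xenv (cdr y) yenv b)
                 (values nil nil))))))

;; The axiom's conclusion and the goal may use the same variable name ?X,
;; but they live in different environments, so they stay distinct:
;; (unify '(p ?x b) 'axiom '(p a ?x) 'goal nil)
;; binds (?X . AXIOM) to A and (?X . GOAL) to B.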
FIRST EXPERIMENTS WITH RUE AUTOMATED DEDUCTION

Vincent J. Digricoli
The Courant Institute and Hofstra University
251 Mercer Street, New York, N.Y. 10012

ABSTRACT

RUE resolution represents a reformulation of binary resolution so that the basic rules of inference (RUE and NRF) incorporate the axioms of equality. An RUE theorem prover has been implemented and experimental results indicate that this method represents a significant advance in the handling of equality in resolution.

A. Introduction

In (1) the author presented the complete theory of Resolution by Unification and Equality, which incorporates the axioms of equality into two inference rules which are sound and complete to prove E-unsatisfiability. Our purpose here is to present systematically the results of experiments with an RUE theorem prover. The experiments chosen were those of McCharen, Overbeek and Wos (2), and in particular we are interested in comparing the results achieved by these two theorem provers.

In MOW, the equality axioms were used explicitly for all theorems involving equality and apparently no use was made of paramodulation. In RUE, where proofs are much shorter, the inference rules themselves make implicit use of the equality axioms, which do not appear in a refutation, and also no use of paramodulation is made. Both systems are pure resolution-based systems.

Before considering the experiments, we first review and summarize the theory of resolution by unification and equality as presented in (1). There we define the concept of a disagreement set, the inference rules RUE and NRF, the notion of viability, the RUE unifying substitution and an equality restriction which inhibits redundant inferences. Here we simply introduce the concept of a disagreement set and define the rules of inference.

A disagreement set of a pair of terms (t1,t2) is defined in the following manner: if t1 and t2 are identical, the empty set is the only disagreement set; if t1 and t2 differ, the set consisting of the single pair {(t1,t2)} is the origin disagreement set. Furthermore, if t1 has the form f(a1,...,ak) and t2 the form f(b1,...,bk), then the set of pairs of corresponding arguments which are not identical is the topmost disagreement set. In the simple example:

    t1 = f( a,  g(b, h(c))   )
    t2 = f( a', g(b', h(c')) )

besides the origin disagreement, there are the disagreement sets:

    D1 = { (a,a'), (g(b,h(c)), g(b',h(c'))) }
    D2 = { (a,a'), (b,b'), (h(c),h(c')) }
    D3 = { (a,a'), (b,b'), (c,c') }

This definition merely defines all possible ways of proving t1 = t2, i.e. we can prove t1 = t2 by proving equality in every pair of any one disagreement set. An input clause set, for example, may imply equality in D1 or D3 but not in D2, or it may most directly prove t1 = t2 by proving equality in D3.

We proceed to define a disagreement set of complementary literals:

    P(s1,...,sn) , ¬P(t1,...,tn)

as the union of disagreement sets:

    D = D1 ∪ D2 ∪ ... ∪ Dn

where Di is a disagreement set of (si,ti). We see immediately that:

    P(s1,...,sn) ∧ ¬P(t1,...,tn) → D

where D now represents the disjunction of inequalities specified by a disagreement set of P, ¬P, and furthermore, that:

    f(a1,...,ak) ≠ f(b1,...,bk) → D

where D is the disjunction of inequalities specified by a disagreement set of f(a1,...,ak), f(b1,...,bk). For example,

    P(f(a,g(b,h(c)))) ∧ ¬P(f(a',g(b',h(c')))) → a≠a' ∨ b≠b' ∨ c≠c'.

The reader is invited to read (1), which states the complete theory of RUE resolution with many examples. Our primary concern here is to discuss experiments with an RUE theorem prover and to begin to assess the effectiveness of this inference system.
Our primary concern here is to discuss experiments with an RUE theorem prover and to begin to assess the effectiveness of this 96 inference system. we Experiments Our experiments deal with Boolean Algebra are asked to prove from the eight axioms : Al :x+0=x A2 :x*1=x A3 :x+Z=l A4 : x *xt=o A5 : x(y+z) = xy +x2 A6 : x + yz = (x+y) (x+2) A7 :x+y=y+x A0 :x*y=y*x (we are denoting logical or by +r logical and by * or juxtaposition, and negation by overbar), the following theorems : and a*0 # 0 I - x*x = 0 a*: # a*0 d = (a/x} - x(y+z) = xy + xz (7 ={a/x} y+z # ;; ay+az # a*0 t- x+0=x o- = E a/y,o/z *a/x j a*: + a*0 # a*0 t- 0+x=x 0- =la*O/x) Tl : Z=l T2 :x+1=1 T3 :x*0=0 T4 : x + xy = x T5 : x(x+y?" = x T6 :x+x=x T7 :x*x=x T8 : (x+y) +z = x+(y+z) T9 : (x*y)*z = x* (y*z) TlO : the complement of x is unique (x*a=O) (x+a=l) (x*b=O) (x+b=l) 4 a= b Tll :z=x -- T12 :x+y =x*y De Morgan's Law I -- T13 :x*y=x+;; De Morgan's Law II These theorems are stated in the order of increasing complexity of proof, with 6= 1 being trivially easy for a human to prove and De Morgan's Laws being very difficult for a human to deduce from the axioms. George and Garrett Birkhoff have a paper on the above proofs published in the Transactions of the American Mathematical Society (3) and Halmos comments on the significantly difficult character of the proofs in his Lectures on Boolean Algebras (4) 0 The following is a machine deduced, five step RUE refutation which proves x*0 = 0 : 0 # a*a - 0 = x*; D o-=<a/x] The above experiments together with many others (dealing with group theory, ring theory, geometry, Henken Models, set theory and program verification) were proposed as benchmarks by McCharen,Overbeek and Wos, who in (2) published the results of their own experiments. We here tabulate the comparative performance of the RUE and MOW theorem provers on the above thearems. The MOW theorem prover uses binary resolution with explicit use of the equality axioms and is implemented in Assembly language on the IBM System 370-Model 195. Great effort was made to enhance the efficiency of their theorem prover and this is described in (2). The RUE theorem prover, on the other hand, represents a first implementation in PLl on a CDC 6600 machine which is much slower than the Model 195. In the experiments each theorem is treated as an independent problem and cannot use earlier theorems as lemmas, so that for example in proving associativity (T8), we need to prove (T2,T3,T4,T5) as sub-theorems. The total number of unifications performed is suggested as the primary measure of comparison rather than time. The comparative results are given in Table 1. From Tl to T7, The RUE theorem prover was very successfull, but at T8 (associativity) results have yet to be obtained since refinements in the heuristic pruning procedure are required and are being developed with the expectation that more advanced results will be available at the conference. RUE represents one of several important methods for handling equality in resolution and it is important to emphasize that it is a complete method whose power is currently being tested in stand-alone fashion. However, it is not precluded that we can combine this method with other tech- niques such as demodulation,paramodulation and reduction theory to achieve a mutually enhanced effect. 97 TABLE 1. THEOREM TOTAL NUMBER OF UNIFICATIONS RUE : MOW Tl 'i=l 77 26,702 T2 x+1=1 688 46,137 T3 x*0=0 676 46,371 T4 x+xy=x 3,152 see below T5 X(X+Y) = x -3,113 tl I! 
TABLE 1.

                                 Total unifications      Time (seconds)       Length of proof*
    Theorem                      RUE        MOW          RUE       MOW        RUE
    T1   0̄ = 1                   77         26,702       10.1      27.5        7
    T2   x + 1 = 1               688        46,137       51.5      "          12
    T3   x * 0 = 0               676        46,371      102.9      57.0       24
    T4   x + xy = x              3,152      see below    41.6      see below  13
    T5   x(x+y) = x              3,113      "
    T7   x * x = x               2,145      "
    T6∧T7                        4,326 (1)  105,839
    T8   (x+y)+z = x+(y+z)       IP         413,455
    T9   (x*y)*z = x*(y*z)       IP         NPR
    T10  unique complement       IP         NPR
    T11                          IP         NPR
    T12  De Morgan's Law I       IP         NPR
    T13  De Morgan's Law II      IP         NPR

Note 1: To prove the double theorem T4∧T5, x+xy=x ∧ x(x+y)=x, we add the negated theorem as a single clause, a+ab≠a ∨ a(a+b)≠a, to the input clause set. It is evident that the erasure of these two literals in a refutation decomposes into two independent subproblems, since no variables appear in the clause. Hence, the refutations for a+ab≠a and a(a+b)≠a obtained in separate experiments T4, T5 can be concatenated and the results of these experiments simply summed, which is what we have done to state the RUE results for the double theorem. The same holds true for T6∧T7.

* The estimated length of MOW proofs with the equality axioms is twice as long as corresponding RUE proofs.

REFERENCES:

1. "Automatic Deduction and Equality" by Vincent J. Digricoli, Proceedings of the October 1979 Annual Conference of the ACM, 240-250.
2. "Problems and Experiments for and with Automated Theorem-Proving Programs", by McCharen, Overbeek and Wos, IEEE Transactions on Computers, Vol. C-25, No. 8, August 1976.
3. "Distributive Postulates for Systems Like Boolean Algebras", Transactions of the American Mathematical Society, Volume 60, July-Dec. 1946.
4. "Lectures on Boolean Algebras", by Paul R. Halmos, Section 2, pages 3-5, D. Van Nostrand.

The completion of these Boolean experiments, together with other experiments and a more fully delineated comparative analysis with systems other than MOW, represents work the author will report upon in the future.
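The disagreement-set definition of the Introduction is easy to state operationally. The sketch below is illustrative only - the s-expression term format and names such as TOPMOST-DISAGREEMENT are invented - and it computes the topmost disagreement set of two terms, plus the fully decomposed set obtained by pushing every disagreeing pair down as far as it will go (D3 in the paper's example).

;;; Illustrative sketch: disagreement sets of two terms written as s-expressions,
;;; e.g. (f a (g b (h c))) and (f a1 (g b1 (h c1))).

(defun topmost-disagreement (t1 t2)
  "Pairs of corresponding arguments that are not identical, when T1 and T2
have the same leading function symbol; otherwise the origin disagreement {(t1,t2)}."
  (cond ((equal t1 t2) '())
        ((and (consp t1) (consp t2)
              (eql (car t1) (car t2))
              (= (length t1) (length t2)))
         (loop for a in (cdr t1) for b in (cdr t2)
               unless (equal a b) collect (list a b)))
        (t (list (list t1 t2)))))

(defun innermost-disagreement (t1 t2)
  "Keep decomposing each disagreeing pair as far as possible."
  (let ((top (topmost-disagreement t1 t2)))
    (if (equal top (list (list t1 t2)))
        top
        (mapcan (lambda (pair)
                  (innermost-disagreement (first pair) (second pair)))
                top))))

;; For the example in the text:
;; (topmost-disagreement '(f a (g b (h c))) '(f a1 (g b1 (h c1))))
;;   => ((A A1) ((G B (H C)) (G B1 (H C1))))          ; D1
;; (innermost-disagreement '(f a (g b (h c))) '(f a1 (g b1 (h c1))))
;;   => ((A A1) (B B1) (C C1))                         ; D3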
WHAT'S WRONG WITH NON-MONOTONIC LOGIC?

David J. Israel
Bolt Beranek and Newman Inc.
50 Moulton St.
Cambridge, Mass. 02238

ABSTRACT

In this paper* I ask, and attempt to answer, the following question: What's Wrong with Non-Monotonic Logic? The answer, briefly, is that the motivation behind the wonderfully impressive work involved in its development is based on a confusion of proof-theoretic with epistemological issues.

What's wrong with non-monotonic logic (and for that matter, with the logic of default reasoning)? The first question we should ask is: What's supposed to be wrong with "standard", monotonic logic? In recent - and extremely impressive - work, Doyle and McDermott [1], McDermott [2], and Reiter [3] have argued that classical logic - in virtue of its monotonicity - is incapable of adequately capturing or representing certain crucial features of real live reasoning and inference. In particular, they note that our knowledge is always incomplete, and is almost always known to be so; that, in pursuing our goals - both practical and theoretical - we are forced to make assumptions or to draw conclusions on the basis of incomplete evidence; conclusions and assumptions which we may have to withdraw in the light of either new evidence or further cogitation on what we already believe. An essential point here is that new evidence or new inference may lead us to reject previously held beliefs, especially those that we knew to be inadequately supported or merely presumptively assumed. In sum, our theories of the world are revisable; and thus our attitudes towards at least some of our beliefs must likewise be revisable.

Now what has all this to do with logic and its monotonicity? Both Reiter and Doyle-McDermott characterize the monotonicity of standard logic in syntactic or proof-theoretic terms. If A and B are two theories, and A is a subset of B, then the theorems of A are a subset of the theorems of B.

* This work was supported by the Advanced Research Projects Agency, and was monitored by ONR under Contract No. N00014-77-C-0378.

To remedy this lack, Doyle and McDermott introduce into an otherwise standard first order language a modal operator "M" which, they say, is to be read as "It is consistent with everything that is believed that...". (Reiter's "M", which is not a symbol of the object language, is also supposed to be read "It is consistent to assume that...". I think there is some unclarity on Reiter's part about his "M". He speaks of it in ways conducive to interpreting it as a metalinguistic predicate on sentences of the object language; and hence not as an operator at all, either object-language or metalanguage. So his default rules are expressed in a language whose object-language contains sentences of the form "Mp", i.e., in a language which, relative to the original first-order object language, is a meta-meta-language.) Now in fact this reading isn't quite right.** The suggested reading doesn't capture the notion Doyle-McDermott and Reiter seem to have in mind. What they have in mind is, to put it non-linguistically (and hence, of course, non-syntactically): that property that a belief has just in case it is both compatible with everything a given subject believes at a given time and remains so when the subject's belief set undergoes certain kinds of changes under the pressure of both new information and further thought, and where those changes are the result of rational epistemic policies.
I've put the notion in this very epistemologically oriented way precisely to hone in on what I take to be the basic misconception underlying the work on non-monotonic logic and the logic of default reasoning. The researchers in question seem to believe that logic - deductive logic, for there is no other kind - is centrally and crucially involved in the fixation and revision of belief. Or to put it more poignantly, they mistake so-called deductive rules of inference for real, honest-to-goodness rules of inference. Real rules of inference are precisely rules of belief fixation and revision; deductive rules of transformation are precisely not.

Consider that old favorite: modus (ponendo) ponens. It is not a rule that should be understood as enjoining us as follows: whenever you believe that p and believe that if p then q, then believe that q. This, after all, is one lousy policy. What if you have overwhelmingly good reasons for rejecting the belief that q? All logic tells you is that you had best reconsider your belief that p and/or your belief that if p then q (or, to be fair, your previously settled beliefs on the basis of which you were convinced that not-q); it is perforce silent on how to revise your set of beliefs so as to ... to what? Surely, to come up with a good theory that fits the evidence, is coherent, simple, of general applicability, reliable, fruitful of further testable hypotheses, etc. Nor is it the case that if one is justified in believing that p and justified in believing that if p then q (or even justified in believing that p entails q), one is justified in believing (inferring) that q. Unless, of course, one has no other relevant beliefs. But one always does.

The rule of modus ponens is, first and foremost, a rule that permits one to perform certain kinds of syntactical transformations on (sets of) formally characterized syntactic entities. (Actually, first and foremost, it is not really a rule at all; it is "really" just a two-place relation between, on the one hand, an ordered pair of wffs, and on the other, a wff.) It is an important fact about it that, relative to any one of a family of interpretations of the conditional, the rule is provably sound, that is, truth (in an interpretation)-preserving. The crucial point here, though, is that adherence to a set of deductive rules of transformation is not a sufficient condition for rational belief; it is sufficient (and necessary) only for producing derivations in some formal system or other. Real rules of inference are rules (better: policies) guiding belief fixation and revision.

** Nor is it quite clear. By "consistent" are we to mean syntactically consistent in the standard monotonic sense of syntactic derivability or in the to-be-explicated non-monotonic sense? Or is it semantic consistency of one brand or another that is in question? This unclarity is fairly quickly remedied. We are to understand by "consistency" standard syntactic consistency, which in standard systems can be understood either as follows: a theory is syntactically consistent iff there is no formula p of its language such that both p and its negation are theorems; or as follows: iff there is at least one sentence of its language which is not a theorem. There are otherwise standard, that is, monotonic, systems for which the equivalence of these two notions does not hold; and note that the first applies only to a theory whose language includes a negation operator.
Indeed, if one is sufficiently simple-minded, one can even substitute for the phrase "good rules of inference" the phrase "(rules of) scientific procedure" or even "scientific method". And, of course, there is no clear sense to the phrase "good rules of transformation". (Unless "good" here means "complete" - but with respect to what? Truth?)

Given this conception of the problem to which Doyle-McDermott and Reiter are addressing themselves, certain of the strange properties of, on the one hand, non-monotonic logic and, on the other, the logic of default reasoning are only to be expected. In particular, the fact that the proof relation is not in general decidable. The way the "M" operator is understood, we believers are represented as follows: to make an assumption that p, or to put forth a presumption that p, is to believe a proposition to the effect that p is consistent with everything that is presently believed and that it will remain so even as my beliefs undergo certain kinds of revisions. And in general we can prove that p only if we can prove at least that p is consistent with everything we now believe. But, of course, by Church's theorem there is no uniform decision procedure for settling the question of the consistency of a set of first-order formulae. (Never mind that the problem of determining the consistency of arbitrary sets of formulae of the sentential calculus is NP-complete.) This is surely wrong-headed: assumptions or hypotheses or presumptions are not propositions we accept only after deciding that they are compatible with everything else we believe, not to speak of having to establish that they won't be discredited by future evidence or further reasoning. When we assume p, it is just p that we assume, not some complicated proposition about the semantic relations in which it stands to all our other beliefs, and certainly not some complicated belief about the syntactic relations any one of its linguistic expressions has to the sentences which express all those other beliefs. (Indeed, there is a problem with respect to the consistency requirement, especially if we allow beliefs about beliefs. Surely, any rational subject will believe that s/he has some false beliefs, or more to the point, any such subject will be disposed to accept that belief upon reflection. By doing so, however, the subject guarantees itself an inconsistent belief-set; there is no possible interpretation under which all of its beliefs are true. Should this fact by itself worry it (or us)?)

After Reiter has proved that the problem of determining whether an arbitrary sentence is in an extension for a given default theory is undecidable, he comments: "(A)ny proof theory whatever for default theories must somehow appeal to some inherently non semi-decidable process. [That is, the proof-relation, not just the proof predicate, is non-recursive; the proofs, not just the theorems, are not recursively enumerable. Why such a beast is to be called a logic is somewhat beyond me - D.I.] This extremely pessimistic result forces the conclusion that any computational treatment of defaults must necessarily have an heuristic component and will, on occasion, lead to mistaken beliefs. Given the faulty nature of human common sense reasoning, this is perhaps the best one could hope for in any event."

Now once again substitute in the above "(scientific or common sense) reasoning" for "default" and then reflect on how odd it is to think that there could be a purely proof-theoretic treatment of scientific reasoning. A heuristic treatment, that is, a treatment in terms of rational epistemic policies, is not just the best we could hope for. It is the only thing that makes sense. (Of course, if we are very fortunate we may be able to develop a "syntactic" encoding of these policies; but we certainly mustn't expect to come up with rules for rational belief fixation that are actually provably truth-preserving. Once again, the only thing that makes sense is to hope to formulate a set of rules which, from within our current theory of the world and of ourselves as both objects within and inquirers about that world, can be argued to embody rational policies for extending our admittedly imperfect grasp of things.)

Inference (reasoning) is non-monotonic: new information (evidence) and further reasoning on old beliefs (including, but by no means limited to, reasoning about the semantic relationships - e.g., of entailment - among beliefs) can and does lead to the revision of our theories and, of course, to revision by "subtraction" as well as by "addition". Entailment and derivability are monotonic. That is, logic - the logic we have, know, and - if we understand its place in the scheme of things - have every reason to love, is monotonic.

BRIEF POSTSCRIPT

I've been told that the tone of this paper is overly critical; or rather, that it lacks constructive content. A brief postscript is not the appropriate locus for correcting this defect; but it may be an appropriate place for casting my vote for a suggestion made by John McCarthy. In his "Epistemological Problems of Artificial Intelligence" [4], McCarthy characterizes the epistemological part of "the AI problem" as follows: "(it) studies what kinds of facts about the world are available to an observer with given opportunities to observe, how these facts can be represented in the memory of a computer, and what rules permit legitimate conclusions to be drawn from these facts." [Emphasis added.] This, though brief, is just about right, except for a perhaps studied ambiguity in that final clause. Are the conclusions legitimate because they are entailed by the facts? (Are the rules provably sound rules of transformation?) Or are the conclusions legitimate because they constitute essential (non-redundant) parts of the best of the competing explanatory accounts of the original data; the best by our own, no doubt somewhat dim, lights? (Are the rules arguably rules of rational acceptance?)

At the conclusion of his paper, McCarthy disambiguates and opts for the right reading. In the context of an imaginative discussion of the Game of Life cellular automaton, he notes that "the program in such a computer could study the physics of its world by making theories and experiments to test them and might eventually come up with the theory that its fundamental physics is that of the Life cellular automaton. We can test our theories of epistemology and common sense reasoning by asking if they would permit the Life-world computer to conclude, on the basis of its experiments, that its physics was that of Life." McCarthy continues:

"More generally, we can imagine a metaphilosophy that has the same relation to philosophy that metamathematics has to mathematics. Metaphilosophy would study mathematical (? - D.I.) systems consisting of an "epistemologist" seeking knowledge in accordance with the epistemology to be tested and interacting with a "world". It would study what information about the world a given philosophy would obtain. This would depend also on the structure of the world and the "epistemologist's" opportunities to interact. AI could benefit from building some very simple systems of this kind, and so might philosophy."

Amen; but might I note that such a metaphilosophy does exist. Do some substituting again: for "philosophy" (except in its last occurrence), substitute "science"; for "epistemologist", "scientist"; for "epistemology", either "philosophy of science" or "scientific methodology". The moral is, I hope, clear. Here is my constructive proposal: AI researchers interested in "the epistemological problem" should look neither to formal semantics nor to proof theory, but to - of all things - the philosophy of science and epistemology.

REFERENCES

[1] McDermott, D., and Doyle, J. "Non-Monotonic Logic I", AI Memo 486, MIT Artificial Intelligence Laboratory, Cambridge, Mass., August 1978.

[2] McDermott, D. "Non-Monotonic Logic II", Research Report 174, Yale University Department of Computer Science, New Haven, Conn., February 1980.

[3] Reiter, R. "A Logic for Default Reasoning", Technical Report 79-8, University of British Columbia Department of Computer Science, Vancouver, B.C., July 1979.

[4] McCarthy, J. "Epistemological Problems of Artificial Intelligence", In Proc. IJCAI-77, Cambridge, Mass., August 1977, pp. 1038-1044.
PATHOLOGY ON GAME TREES: A SUMMARY OF RESULTS*

Dana S. Nau
Department of Computer Science
University of Maryland
College Park, MD 20742

ABSTRACT

Game trees are widely used as models of various decision-making situations. Empirical results with game-playing computer programs have led to the general belief that searching deeper on a game tree improves the quality of a decision. The surprising result of the research summarized in this paper is that there is an infinite class of game trees for which increasing the search depth does not improve the decision quality, but instead makes the decision more and more random.

I INTRODUCTION

Many decision-making processes are naturally modeled as perfect information games between two players [3, 7]. Such games are generally represented as trees whose paths represent various courses the game might take. In artificial intelligence, the well-known minimax procedure [2, 7] is generally used to choose moves on such trees. If a correct decision is to be guaranteed using minimaxing, substantial portions of the game tree must be searched, even when using tree-pruning techniques such as alpha-beta [2, 7]. This is physically impossible for large game trees. However, good results have been obtained by searching the tree to some limited depth, estimating the minimax values of the nodes at that depth using a heuristic evaluation function, and computing the minimax values for shallower nodes as if the estimated values were correct [2, 7]. There is almost universal agreement that when this is done, increasing the search depth increases the quality of the decision. This has been dramatically illustrated with game-playing computer programs [1, 8, 9], but such results are purely empirical.

The author has developed a mathematical theory modeling the effects of search depth on the probability of making a correct decision. This research has produced the surprising result that there is an infinite class of game trees for which, as long as the search does not reach the end of the tree (in which case the best possible decision could be guaranteed), deeper search does not improve the decision quality, but instead makes the decision more and more random. For example, Figure 1 illustrates how the probability of correct decision varies. Section 2 of this paper summarizes the mathematical model used in this research, Section 3 presents the main result, and Section 4 contains concluding remarks.

FIGURE 1.--Probability of correct decision as a function of search depth (0 through 16) on the game tree G(1,1), for five different evaluation functions. On G(1,1), the probability of correct decision is 0.5 if the choice is made at random. For each of the five functions, this value is approached as the search depth increases.

--------
* This work was supported in part by a National Science Foundation graduate fellowship, in part by a James B. Duke graduate fellowship, and in part by N.S.F. grant number ENG-7822159 to the Laboratory for Pattern Analysis at the University of Maryland. The results discussed in this paper are presented in detail in the author's Ph.D. dissertation [4].

II THE MATHEMATICAL MODEL

Let G be a game tree for a game between two players named Max and Min. Nodes where it is Max's or Min's move are called max and min nodes, respectively. Assume that G has no draws (this restriction can easily be removed, but it simplifies the mathematics).
Then if G is finite, every node of G is a forced win for either Max or Min. Such nodes are called "+" nodes and "-" nodes, respectively. If G is infinite, not every node need be a "+" or "-" node, but the "+" and "-" labeling can easily be extended to all nodes of G in a way which is consistent with all finite truncations of G. Correct decisions for Max and Min are moves leading to "+" and "-" nodes, respectively. "+" max nodes (which we call S nodes) may have both "+" and "-" children; "+" min nodes (T nodes) have only "+" children; "-" min nodes (U nodes) may have both "+" and "-" children; and "-" max nodes (V nodes) have only "-" children. Thus it is only at the S and U nodes that it makes a difference what decision is made. These nodes are called critical nodes.

An evaluation function on G may be any mapping e from the nodes of G into a set of numbers indicating how good the positions are estimated to be. For computer implementation, the range of e must be finite. We take this finite set to be {0, 1, ..., r}, where r is an integer. Ideally, e(g) would equal r if g were a "+" node and 0 if g were a "-" node, but evaluation functions are usually somewhat (and sometimes drastically) in error. Increasing the error means decreasing e(g) if g is a "+" node and increasing e(g) if g is a "-" node. Thus if we assume that the errors made by e are independent and identically distributed, the p.d.f. f for the values e returns on "+" nodes is a mirror image of the p.d.f. h for the values e returns on "-" nodes; i.e., f(x) = h(r-x), x = 0, 1, ..., r. f may be represented by the vector P = (f(0), f(1), ..., f(r)), which is called the probability vector for e.

III RESULTS

The probability vector for e induces probability vectors on the minimax values of the nodes of G, and the probability of making a correct decision at any critical node g of G is a function of the probability vectors for the minimax values of the children of g. This probability is thus determined by the structure of the subtree rooted at g, and little can be said about it in general. However, if G has a sufficiently regular structure, the properties of this probability can be analyzed.

Let m and n be positive integers, and let G(m,n) be the unique game tree for which

1. the root is an S node (this choice is arbitrary and the results to follow are independent of it);
2. each critical node has m children of the same sign and n children of opposite sign;
3. every node has m+n children.

G(m,n) is illustrated in Figure 2.

FIGURE 2.--The game tree G(m,n). Min nodes are indicated by the horizontal line segments drawn beneath them.

If moves are chosen at random on G(m,n), the probability that a correct choice is made at a critical node is obviously m/(m+n). If the choice is made using a depth d minimax search and an evaluation function with probability vector P, it is proved [4, 5] that the probability that the decision is correct depends only on m, n, P, and d. We denote this probability by D_{m,n}(P,d). The trees G(m,n) have the following surprising property.

Theorem 1. For almost every* probability vector P and for all but finitely many values of m and n,

    lim_{d -> infinity} D_{m,n}(P,d) = m/(m+n).

Thus, as the search depth increases, the probability of correct decision converges to what it would be if moves were being chosen at random.
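To see numerically what Theorem 1 predicts, the following Monte Carlo sketch (not part of the original paper) builds G(1,1) on the fly, draws frontier values from an assumed probability vector, and estimates how often depth-limited minimax picks the "+" child at the root. The particular probability vector, the tie-breaking rule, and the trial counts are illustrative assumptions only.

    # Monte Carlo sketch (not from the paper) of Theorem 1 on the tree G(1,1).
    # Node kinds: S = "+" max, T = "+" min, U = "-" min, V = "-" max.
    # Frontier values are drawn from an assumed probability vector over {0,...,r};
    # "-" nodes use the mirror-image distribution, as in the model above.
    import random

    R = 4
    F_PLUS = [0.05, 0.10, 0.15, 0.30, 0.40]   # assumed p.d.f. f on "+" nodes
    F_MINUS = F_PLUS[::-1]                    # h(x) = f(r - x) on "-" nodes

    def static_value(sign):
        dist = F_PLUS if sign == "+" else F_MINUS
        return random.choices(range(R + 1), weights=dist)[0]

    def minimax(sign, is_max, depth):
        # Value of a G(1,1) node of the given sign, searched to the given depth.
        if depth == 0:
            return static_value(sign)
        if is_max:
            children = ["+", "-"] if sign == "+" else ["-", "-"]   # S vs. V nodes
            return max(minimax(s, False, depth - 1) for s in children)
        else:
            children = ["+", "+"] if sign == "+" else ["+", "-"]   # T vs. U nodes
            return min(minimax(s, True, depth - 1) for s in children)

    def correct_at_root(depth):
        # The root is an S node; the correct decision is its "+" (min) child.
        plus = minimax("+", False, depth - 1)
        minus = minimax("-", False, depth - 1)
        if plus == minus:
            return random.random() < 0.5       # forced random choice on ties
        return plus > minus

    for d in (2, 4, 6, 8, 10):
        trials = 1000
        wins = sum(correct_at_root(d) for _ in range(trials))
        print("depth %2d: estimated P(correct) = %.2f" % (d, wins / trials))

With a vector such as the one above, the estimates start well above 0.5 at shallow depths and approach 0.5 (the random-choice value m/(m+n) for G(1,1)) as the depth grows, which is the behavior plotted in Figure 1.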
This pathological behavior occurs because as the search depth increases it becomes increasingly likely that all children of a critical node receive the same minimax value, whence a choice must be made at random among them. Figure 1 illustrates Theorem 1 on the game tree G(1,1), using five different values of P.

The significance of Theorem 1 for finite games is that infinitely many finite games can be generated by truncating G(m,n) in whatever way desired. Deeper search on these trees will yield increasingly random decisions as long as the search does not reach the end of the tree. Additional theoretical and experimental results reported elsewhere [4, 5, 6] provide additional information about which of the G(m,n) are pathological and why. Theorem 1 almost certainly extends to a much larger class of game trees, but the irregular structure of most game trees would require a much more complicated proof.

IV CONCLUSIONS

The author believes that the pathology of the trees G(m,n) indicates an underlying pathological tendency present in most game trees. However, in most games this tendency appears to be overridden by other factors. Pathology does not appear to occur in games such as chess or checkers [1, 8, 9], but it is no longer possible blithely to assume (as has been done in the past) that searching deeper will always result in a better decision.

REFERENCES

1. Biermann, A. W. Theoretical issues related to computer game playing programs. Personal Comput. (Sept. 1978), 86-88.
2. Knuth, D. E., and Moore, R. W. An analysis of alpha-beta pruning. Artif. Intel. 6 (1975), 293-326.
3. LaValle, I. H. Fundamentals of Decision Analysis. Holt, Rinehart and Winston, New York, 1978.
4. Nau, D. S. Quality of decision versus depth of search on game trees. Ph.D. Dissertation, Duke University (Aug. 1979).
5. Nau, D. S. Decision quality as a function of search depth on game trees. Tech. Report TR-866, Computer Sci. Dept., Univ. of Md. (Feb. 1980). Submitted for publication.
6. Nau, D. S. The last player theorem. Tech. Report TR-865, Computer Sci. Dept., Univ. of Md. (Feb. 1980). Submitted for publication.
7. Nilsson, N. J. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill, New York, 1971.
8. Robinson, A. L. Tournament competition fuels computer chess. Science 204 (1979), 1396-1398.
9. Truscott, T. R. Minimum variance tree searching. Proc. First Int. Symp. on Policy Anal. and Inf. Syst. (1979), 203-209.

--------
* A property holds for almost every member of a set if it holds everywhere but on a subset of measure zero. Thus for any continuous p.d.f. on the set, the probability of choosing a member of the set to which the property does not apply is 0.
MAX-MIN CHAINING OF WEIGHTED ASSERTIONS IS LOOP-FREE

S. W. Ng and Adrian Walker
Work performed at Rutgers University*

ABSTRACT

If a system uses assertions of the general form "X causes Y" (e.g. MYCIN rules), then loop situations in which X1 causes X2, X2 causes X3, ..., Xn causes X1 are, intuitively, best avoided. If an assertion has an attached confidence weight, as in "X (0.8)-causes Y", then one can choose to say that the confidence in a chain of such assertions is as strong as the weakest link in the chain. If there are several chains of assertions from X to Z, then one can choose to say that X causes Z with a confidence equal to that of the strongest chain. From these choices, it follows that the confidence that X causes Z corresponds to a loop-free chain of assertions. This is true even if there are chains from X to Z with common subchains and loops within loops. An algorithm for computing the confidence is described.

I INTRODUCTION and TERMINOLOGY

There is currently considerable interest in representing knowledge about a practical situation in the form of weighted cause-effect or situation-action rules, and in using the knowledge so represented in decision-making systems. For example, in medical decision making systems, the rules may represent causal trends in a disease process in a patient [6], or the rules may represent trends in the decision process of a physician who is diagnosing and treating a patient [2,4]. In such representations, the chaining together of rules can be written as a weighted, directed graph. In MYCIN [2] the graphs are like and-or trees, while in OCKHAM [3,4,5] the graphs may have loops. This paper presents a result which appears in [1]. From the result it follows that, using the max and min operations, a graph containing loops can be interpreted as though it were loop-free.

--------
* Authors' present addresses: S. W. Ng, 6F Wing Hing Street, Hong Kong. Adrian Walker, Bell Laboratories, Murray Hill, NJ.

The kind of graph in question is a stochastic graph (sg) consisting of nodes N = {1,2,...,n} and a function P from N X N to the real numbers w, 0 <= w <= 1. P is such that, for each i in N, the sum over j of P(i,j) is at most 1. If P(i,j) = w, then w is the weight of the arc from node i to node j. A path in an sg is a string n_1 ... n_l in N+ such that P(n_k, n_{k+1}) > 0 for 1 <= k < l. n_2, ..., n_{l-1} are the intermediate nodes of n_1 ... n_l. A path n_1 ... n_l of a graph is said to have a loop if n_i = n_j for some i, j such that either 1 <= i < j < l or 1 < i < j <= l. Otherwise the path is loop-free. The weight of a path n_1 n_2 ... n_l of an sg is the minimum over 1 <= i < l of the weight of the arc from n_i to n_{i+1}. The k-weight w_{ij}^k from node i to node j of a graph is the maximum of the weights of all the paths from i to j having no intermediate node with number higher than k. The weight w_{ij} from node i to node j of an sg is w_{ij}^n.

II EXAMPLES

This section gives examples of potential causal loops, in MYCIN [2] and in OCKHAM [3,4,5], and it shows how these loops are avoided by the use of the maximum and minimum operations.

A. A MYCIN Example

Consider a set of MYCIN rules

    B ∧ C (1.0)→ A
    B (1.0)→ D
    D ∨ E (0.5)→ B
    G ∧ H (0.5)→ B

and suppose that C, E, G, and H are known with confidences 0.9, 0.8, 0.5, 0.4, respectively. Writing c(X) for the confidence in X, confidences propagate through rules by:

    c(Z) = w · max(c(X), c(Y))   for   X ∨ Y (w)→ Z,
and
    c(Z) = w · min(c(X), c(Y))   for   X ∧ Y (w)→ Z.
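As a concrete check of these two propagation rules, here is a minimal Python sketch (not from the paper); the rule encoding, the fixed-point iteration, and the policy of keeping the strongest support found so far are assumptions made only for illustration.

    # Sketch (not from the paper): MYCIN-style confidence propagation using
    # max over alternative supports and min within a conjunction.
    rules = [
        (("B", "C"), "AND", 1.0, "A"),
        (("B",),     "AND", 1.0, "D"),
        (("D", "E"), "OR",  0.5, "B"),
        (("G", "H"), "AND", 0.5, "B"),
    ]
    conf = {"C": 0.9, "E": 0.8, "G": 0.5, "H": 0.4}   # known confidences

    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for antecedents, connective, w, head in rules:
            vals = [conf.get(a, 0.0) for a in antecedents]
            combine = max if connective == "OR" else min
            v = w * combine(vals)
            if v > conf.get(head, 0.0):  # keep the strongest support found so far
                conf[head] = v
                changed = True

    print(conf["A"])                     # 0.4

The value obtained for A agrees with the tree computation given next, and the support behind it comes only from the loop-free chain E, B, A.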
The greatest confidence which can be computed in A is c(A) = 0.4 by the tree

    A ← ((B ← (D ← (B ← G ∧ H)) ∨ E) ∧ C)

B occurs twice, so the tree can be thought of as a graph with a loop. However, the value of c(A) depends only on the loop-free path EBA.

B. An OCKHAM Example

The following set of OCKHAM [3,4,5] rules is intended to show a strategy of a person who is deciding whether to stay at home, go to the office, or to try for a standby flight to go on vacation. The external factors project deadline, snowstorm, project completed, and another flight influence the choice by placing the arc(s) so labelled in a stochastic graph. The rules are:

    HOME (project deadline, 1.0)→ OFFICE
    OFFICE (snowstorm, 0.5)→ HOME
    OFFICE (project completed, 0.5)→ AIRPORT-STANDBY
    AIRPORT-STANDBY (another flight, 0.25)→ AIRPORT-STANDBY
    AIRPORT-STANDBY (snowstorm, 0.75)→ HOME

These rules make up a stochastic graph with nodes HOME, OFFICE, and AIRPORT-STANDBY. If all of the external factors project deadline, snowstorm, project completed, and another flight are true, then the graph has five arcs and multiple loops. If the weight from HOME to AIRPORT-STANDBY is considered, then it turns out to be 0.5. The corresponding path, HOME-OFFICE-AIRPORT-STANDBY, is loop-free.

III ALGORITHM and RESULTS

The algorithm MAXMIN, shown below, computes the weight from a given node to another node in an sg. Note that, by Step 2, MAXMIN runs in O(n^3) time.

MAXMIN
Input: A stochastic graph of n nodes
Output: n^2 real numbers
Step 1: for 1 <= i,j <= n do B_{ij}^0 := P(i,j)
Step 2: for k := 1 to n do
          for 1 <= i,j <= n do
            B_{ij}^k := max(B_{ij}^{k-1}, min(B_{ik}^{k-1}, B_{kj}^{k-1}))
Step 3: for 1 <= i,j <= n do output B_{ij}^n

The properties of paths, path weights, and the values B_{ij}^k, described in the Lemma below, are established in Appendix I.

Lemma

In an sg of n nodes, the following statements hold for 1 <= i,j <= n and for 0 <= k <= n:

(i) If w_{ij}^k > 0, then there exists a loop-free path from i to j whose weight is w_{ij}^k;
(ii) B_{ij}^k = w_{ij}^k.

Setting k = n in parts (i) and (ii) of the Lemma yields the two results:

Result 1

In any sg the weight w_{ij}, that is, the maximum path weight over all paths from i to j, is equal to the maximum over only the loop-free paths from i to j.

Result 2

If MAXMIN receives as input an sg with n nodes, then, for 1 <= i,j <= n, the output B_{ij}^n is equal to the weight w_{ij} from node i to node j of the graph.

Result 1 establishes a property of any sg, namely that the weight from one node to another is the weight of some loop-free path, while Result 2 establishes that MAXMIN is one possible algorithm for finding such weights.
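The MAXMIN pseudocode translates almost directly into executable form. The following Python rendering is mine, not the authors'; it uses an adjacency-matrix input, and the in-place update is equivalent to the B^{k-1}/B^k formulation because row k and column k are left unchanged during pass k. The numeric example encodes the OCKHAM graph above with all external factors true.

    # One possible rendering of MAXMIN (a Floyd-Warshall-style max-min closure).
    # P is an n x n matrix with P[i][j] the weight of the arc i -> j (0.0 if absent).

    def maxmin(P):
        n = len(P)
        B = [row[:] for row in P]                  # Step 1: B^0 := P
        for k in range(n):                         # Step 2: allow k as an intermediate node
            for i in range(n):
                for j in range(n):
                    B[i][j] = max(B[i][j], min(B[i][k], B[k][j]))
        return B                                   # Step 3: B^n holds the weights w_ij

    # Nodes: 0 = HOME, 1 = OFFICE, 2 = AIRPORT-STANDBY (all external factors true).
    P = [[0.0,  1.0, 0.0],
         [0.5,  0.0, 0.5],
         [0.75, 0.0, 0.25]]
    W = maxmin(P)
    print(W[0][2])   # 0.5, the weight of the loop-free path HOME-OFFICE-AIRPORT-STANDBY

The three nested loops also make the O(n^3) bound stated in the text immediate.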
IV CONCLUSIONS

In a system in which weighted causal assertions can be combined into causal paths and graphs, causal loops can occur. Common sense about everyday causality suggests that such loops are best avoided. If the weight of a path is chosen to be the minimum of the individual arc weights, and the net effect of a start node on a final node is chosen to be the maximum of the path weights from the start node to the final node, then the weights (by whatever algorithm they are computed) are independent of the presence of loops in the underlying graph. There is a simple O(n^3) algorithm to compute these weights.

ACKNOWLEDGEMENT

Thanks are due to A. Van der Mude for helpful comments.

APPENDIX I

Proof of Lemma

Let k = 0. If w_{ij}^0 > 0, then by definition there exists a path from i to j having no intermediate nodes, whose weight is w_{ij}^0. Clearly this path is ij, which is loop-free. So we may write γ_{ij}^0 = ij, where γ_{ij}^k denotes a path from i to j having no intermediate node with number greater than k. Then w_{ij}^0 = P(i,j) = B_{ij}^0. If w_{ij}^0 = 0, then there is no such path, and B_{ij}^0 = 0.

Suppose, by way of inductive hypothesis, that for 1 <= i,j <= n and for some (k-1) < n, (i) if w_{ij}^{k-1} > 0 then there is a loop-free path γ_{ij}^{k-1} from i to j, with each intermediate node at most k-1, whose weight is w_{ij}^{k-1}, and (ii) B_{ij}^{k-1} = w_{ij}^{k-1}.

If w_{ij}^k > 0 then there is a path γ from i to j whose weight is w_{ij}^k. γ is such that either (A) each intermediate node of γ is at most (k-1), or (B) γ goes from i to k, from k to k some number of times, then from k to j, with each intermediate node of each subpath being at most (k-1). This is because (A) and (B) exhaust all possibilities. In case (A) it is clear that w_{ij}^k = w_{ij}^{k-1}, and the inductive step for part (i) of the Lemma is completed with γ_{ij}^k = γ_{ij}^{k-1}.

In case (B), it follows from our induction hypothesis that there exist loop-free paths γ_{ik}^{k-1}, γ_{kk}^{k-1}, γ_{kj}^{k-1} with weights w_{ik}^{k-1}, w_{kk}^{k-1}, w_{kj}^{k-1} respectively. Let w = min(w_{ik}^{k-1}, w_{kj}^{k-1}) and w' = w_{kk}^{k-1}, and consider the subcases (B1) in which γ goes from k to k zero times, and (B2) in which γ goes from k to k one or more times. In (B1) the weight of γ is clearly w, while in (B2) it is min(w, w'). Hence, from the definition of w_{ij}^k, we have w_{ij}^k = max(w, min(w, w')), which is simply w. So part (i) of the Lemma holds with γ_{ij}^k = γ_{ik}^{k-1} γ_{kj}^{k-1}. From part (ii) of the inductive hypothesis, and from Step 2 of the MAXMIN algorithm, it follows that B_{ij}^k = max(w_{ij}^{k-1}, w). So B_{ij}^k = max(w_{ij}^{k-1}, w_{ij}^k) = w_{ij}^k, since it follows from the definition of w_{ij}^k that w_{ij}^k >= w_{ij}^{k-1}. So in either of the cases (A) and (B), B_{ij}^k = w_{ij}^k, which establishes part (ii) of the Lemma for the case w_{ij}^k > 0.

If w_{ij}^k = 0 then there is no path from i to j. Suppose B_{ij}^k != 0. Then either w_{ij}^{k-1} != 0, or both of w_{ik}^{k-1} and w_{kj}^{k-1} are nonzero. In each case there is a path from i to j, a contradiction. So if w_{ij}^k = 0 then B_{ij}^k = w_{ij}^k. □

REFERENCES

[1] Ng, S. W., and Walker, A. "Max-min Chaining of Weighted Assertions is Loop-free", CBM-TR-73, Dept. of Comp. Sci., Rutgers University, 1977.

[2] Shortliffe, E. Computer Based Medical Consultations: MYCIN. American Elsevier, 1976.

[3] Van der Mude, A., and Walker, A. "On the Inference of Stochastic Regular Grammars", Information and Control, 38:3 (1978), 310-329.

[4] Walker, A. "A Framework for Model Construction and Model Based Deduction in a System with Causal Loops", In Proc. Third Illinois Conf. Med. Info. Syst., 1976.

[5] Walker, A. "On the Induction of a Decision Making System from a Data Base", CBM-TR-80, Dept. of Comp. Sci., Rutgers University, 1977.

[6] Weiss, S. "Medical Modeling and Decision Making", CBM-TR-27, Dept. of Comp. Sci., Rutgers University, 1974.
Applying General Induction Methods to the Card Game Eleusis Thomas G. Dietterich Department of Computer Science Stanford University Stanford, CA 94305 Abstract Research was undertaken with the goal of applying general universally-applicable induction methods to complex real-world problems. The goal was only partially met. The chosen domain-the card game Eleusis-was still somewhat artificial, and the universally-applicable induction methods were found to be lacking in important ways. However, the resulting Eleusis program does show that by using knowledge-based data interpretation and rule evaluation techniques and model-fitting induction techniques, general induction methods can be used to solve complex problems. Introduction Work in the area of computer induction is characterized by a continuum from general, universally-applicable methods [5, 6, 7, 9, 10, 121 to specific, problem-oriented methods [2, 8, 111. The general-purpose methods have been criticized for lacking the power to operate in real-world domains. Problem-oriented methods have been criticized for being too specialized to be applied to any problems outside their original domains. This paper describes an attempt to bridge this gap by applying general-purpose induction algorithms to the problem of inducing secret rules in the card game Eleusis. Further details are available in [3]. A Program for Eleusis Eleusis (developed by Robert Abbott [l, 41) is a card game in which players attempt to induce a secret rule invented by the dealer. The secret rule describes a linear sequence of cards. In their turns, the players attempt to extend this sequence by playing additional cards from their hands. The dealer gives no information aside from indicating whether or not each play is correct. Players are penalized for incorrect plays by having additional cards added to their hands. The game ends when a player empties his hand. A record of the play is maintained as a layout (Figure 1) in which the top row, or mainline, contains all of the correctly-played cards in sequence. Incorrect cards are placed in side lines below the main line card which they follow. mainline: 3H QS 4C JD 2C 10D 8H 7H 2C 5H sidelines: JD AH AS IOH 5D 8H 10s QD Rule 1: “If the last card is odd, play black, if the last card is even, play red.” Figure 1. Sample Eleusis Layout (after El]). This research sought to develop a program which could serve as an intelligent assistant to a human Eleusis player. The program needed to be able to: ) discover rules which plausibly describe the layout, ) accept rules typed by the user and test &hem against the layout, ) extend the layout by suggesting cards to be played from the player’s hand. Although Eleusis is artificial and noise-free, it is sufficiently complex to provide a reasonable test bed for inductive techniques. The development of an intelligent assistant required not only basic induction methods but also extensive deduction techniques for testing rules and extending the layout. Problems with Existing Induction Methods While designing the rule-discovery portion of the Eleusis program, existing induction algorithms [5, 6, 7, 9, 10, 121 were examined and found to be lacking in three fundamental ways. The first major problem with some of these algorithms is their emphasis on conjunctive generalizations. Many Eleusis rules are disjunctive. 
For example, Rule 1 can be written as: tli {odd(cardi-1) A black(cardi) V even(cardi-1) A red(cardi)} The second major problem with these algorithms is that they make implicit assumptions concerning plausible generalizations-assumptions which are not easily modified. Of the algorithms examined, only Mitchell’s version space algorithm [lo] maintains information concerning all rules consistent with the data (and his algorithm is still oriented toward conjunctive generalization). The algorithms of Hayes-Roth and Vere both seek the most specific rule consistent with the data, while Michalski’s Aq algorithm seeks a disjunctive description with the fewest conjunctive terms. In contrast, the plausibility heuristics for Eleusis are: Choose rules with intermediate degree of generality. (Justification: the dealer is unlikely to choose a rule which is overly general because it would be too difficult to discover. Conversely, overly specific rules are easily discovered because they lead to the creation of numerous counter-examples during play.) Choose disjunctive rules based on symmetry. (Justification: Rule 1 is an excellent example of a symmetric disjunctive rule. Most often in Eleusis, the terms of a disjunction define mutually exclusive cases which have some symmetric relationship to each other. The dealer is very likely to choose such rules because they are not too hard-nor too easy--to discover.) (These plausibility heuristics are based on the assumption that the dealer is rational and that he is attempting to maximize his own score (according to the rules of the game). This is an artificial assumption. It is very rare in science that we have such insight into nature. However, in all domains plausibility criteria must be available-otherwise, we don’t know what we are searching for.) The third major problem with using general-purpose induction techniques in Eleusis is that the raw data of the Eleusis layout are not in a form suitable for generalization. (Many researcliers [2, 111 have pointed out this problem in other domains.) One aspect of this problem is evident in Rule 1: neither color nor parify is explicit in the representation of the cards. Another difficulty is that the sequential ordering of the cards is implicit in their position in the layout. It must be made explicit in order to discover rules like Rule 1. Two techniques were developed to address these problems. First, in order to avoid an exhaustive search of rule space and at the same 218 time avoid the “tunnel vision” of existing algorithms, rule models were developed to guide the induction process. Secondly, in order to transform the input data info a form appropriate for generalization, a series of knowledge-based processing layers were created. Induction by Model-Fitting By analogy with traditional statistical time-series analysis, the program uses a model-fitting approach to induction. The term model denotes a syntactic or functional skeleton which is fleshed out by the induction algorithms to form a rule. In traditional regression analysis, for example, the model is the regression polynomial whose coefficients must be determined by induction from the data. Properly chosen models can strongly constrain the search required for induction. After looking at several Eleusis games, the following models were designed for the Eleusis program: B Decomposition. This model specifies that the rule must take the form of an exclusive disjunction of if-lhen rules. 
The condifion parts of the rules must refer only to cards prior to the card to be predicted. The action parts of the if-then rules describe correct plays given that the condition parts are true. The condition parts must be mutually exclusive conjunctive descriptions. The action parts are also conjunctions. Rule 1 fits the decomposition model: Vi odd(cardi-1) * black(cmdi) V even( cardi- 1) =P red(cardJ ) Periodic. A rule of this model describes the layout as a periodic function. For example, Rule 2 (Figure 2) is a periodic rule. The layout is split into phases according to the length of the proposed period. The periodic model requires that each phase have a conjunctive description. JC 4D QH 3s QD 9H QC 7H QD 9D QC 3H KC 5s 4s 10D 7s M phase 0: JC QH QD QC QD QC 5s 4s 1OD 7s phase 1: 0 4D 3s 9H 7H 9D 3H KC Rule 2: (periodic rule with length 2): phase 0: Vi faced(cardi) phase 1: tli nonfaced(cardi) Figure 2. A Periodic Rule. ) Disjunctive Normal Form (DNF) with fewest terms. The Aq algorithm (Michalski [9]) is used to discover rules which have the fewest number of separate conjunctions. The Aq algorithm was given heuristics to guide it towards symmetric, disjoint disjunctive terms. By definition, not all Eleusis rules can be represented using these three models. But, these models, when combined with segmentation (see below), cover all but one or two of the Eleusis rules which I have seen. For each of these models, an algorithm was developed to fit the data to the model. In order to fit the data to the decomposition model, the program must determine which variables to decompose on, i.e. which variables to test in the condition part of the rule (Rule 1 decomposes on parity E {odd, even)). The program must also decide how far into the past this decomposition should apply (i.e. do we look at just the most recent card, or the two most recent cards, . . . . etc.). Once the decomposition variables and the degree of lookback are determined, the algorithe must find a conjunctive description for the action parts of the rules. The program uses a generate-and-test approach. First, it considers rules which look back only one card, then two cards, and so on until a rule consistent with the data is found. To determine the 219 decomposition variable(s), it generates trial rules by decomposing on each variable in turn and chooses the variable which gives the simplest rule. If the resulting rule is not consistent with the data, the layout is decomposed into sub-layouts based on the chosen variable, and a second decomposition variable is again determined by generating trial decompositions and selecting the simplest. This process is repeated until a rule consistent with the data is found. (This is a beam search with a beam width of 1). In order to fit the periodic model, the program chooses a length for the period, splits the layout into separate phases, and finds a conjunctive description of each phase. Since the rule is more plausible if the descriptions of each pha’se are mutually exclusive, the algorithm attempts to remove overlapping conditions in the descriptions of the different phases. Again, a generate-and-test approach is used to generate periodic rules with different length periods (from length 1 upwards) until an acceptable rule is discovered. The Aq aigorithm is used to fit dati to the DNF model. Knowledge-layer Structure Like many other AI systems, the Eleusis program is structured as a set of layers, or more accurately, rings, based on the kinds of knowledge used to solve the problem (Figure 3). 
Each layer takes input data from the outside, transforms the data using knowledge appropriate to this layer, performs generalization by searching the space of rules at this level of abstraction, and evaluates the obtained rules. Rules which are sufficiently plausible are returned to the outer layers. Each layer calls the inner layers to perform tasks requiring knowledge appropriate to those inner layers. Figure 4 shows the five layers of the Eleusis program. Notice that in Eleusis, the outermost ring is very specific to Elcusis, while the inner-most rings contain only the general model-fitting induction algorithms. This design is intended to allow the program to be easily applied to similar problems. Since all Eleusis-specific knowledge is in the outer-most two layers, these could be stripped off and replaced by layers which apply different kinds of data transformations to solve different problems (e.g. letter series completion, sequence extrapolation). Figure 3. The Knowledge-layer Scheme. 5 User Interface 4 Eleusis KnowledPe , ; Seg~ Sequential Analysis 1 Basic induction Most Specific CL Most General Figure 4. Layered Structure of Eleusis Program. The five layers fiinction as follows. The outer-most layer provides an interface for the user. Layer 4 transforms the layout by making explicit such things as the color, parity, prime-ness, and faced-ness of the cards. Layer 3 segments the layout. Segmentation is used to discover rules such as Rule 3 (Figure 5) which involve first splitting up the layout into segments according to some criterion (like constant color) and deriving a new layout based on the lengths of these segments. Layer 2 makes the order of the events explicit [4] Gardner, Martin, “On Playing the New EIeusis, the game that either by creating “first difference” variables (e.g. Avalue(caQ = simulates the search for truth,” Scientific American, 237, value(c&$ - value(cardi_l)) or by breaking the layout into separate October. 1977, pp 18-25. phases (for periodic rules). The result of the preprocessing of layers 5 through 2 is that layer 1 is called with a specific model for which [S] Hayes-Roth, F:; J. McDermotf “An Interference Matching Technique for Inducing Abstractions”, Communicufions of fhe the degree of Zookback and (optionally) length of period have been specified and with a set of unordered events to which the model is to be fitted. Layer 1 actually performs the model-fitting using one of the three model-fitting induction algorithms. Once the rules have been induced, they are passed back up through the layers for evaluation. Layer 2 evaluates the rules using knowledge about ordering (e:g. guaranteeing that the rule doesn’t lead to a dead end). Layer 3 checks that the rules are consistent with the scgmcntation it performed (in particular, the boundary values cause some problems). Layer 4 evaluates the rules according to the heuristics for plausible rules in Elcusis. Finally, layer 5 prints any rules which survive this evaluation process. ACM, 21:5, 1978, pp. 401-410. [6] Hunt, E.B.. Experiments in Induclion, Academic Press, 1966. f’7] Larson, J., “Inductive Inference in the Variable Valued Predicate Logic System VL21 : Methodology and Computer Implementation”, Rept. No. 869, Dept. of Comp. Sci., Univ. of III., Urbana, May 1977. [8] Lenat, D., “AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search,” Comp. Sci. Dept., Rept. STAN-CS-76-570, Stanford University, July 1976. 191 Michalski. R. 
S., “Algorithm Ag for the Quasi-Minimal Solution of the Covering Problem,‘* Archiwum -Automafyki i Telemechaniki, No. 4, Polish Academy of Sciences, 1969 (in Polish). [lo] Mitchell, T. M., “Version Spaces: an Approach to Concept Learning,” Comp. Sci. Dept. Rept. STAN-CS-78711, Stanford University, December 1978. I AH 7C 6C 9s 10H 7H 1 OD JC AD 4H 8D 7C KD 6s QD 3s I JH I Rule 3: “Play odd-length strings of cards where color is constant within each string.” 1 The segmenled layout looks like this (color, length): (red, 1) (black, 3)’ (red, 3) (black, 1) (red, 3) Figure 5. A Segmentation-based Rule. The program works well. The three rule models, when combined with segmentation, span a search space of roughly 1O183 possible rules (several control parameters affect the size of this space). The program generates and tests roughly 19 different parameterizations of the three models in order to choose three to five plausible rules. It runs quite quickly (less than seven seconds, on a Cyber 175, in the worst case so far). The rules developed are similar to those invented by humans playing the same games (15 complete games have been analyzed). Conclusion General induction techniques can be used to solve complex learning tasks, but they form only part of the solution. In the Eleusis domain, data interpretation, rule evaluation, and model-directed induction were all required to develop a satisfactory program, A degree of generality was obtained by segregating the functions of the program into layers according to the generality of the knowledge they required. This should allow the program to be applied to similar tasks merely by “peeling off’ and replacing its outer layers. Acknowledgments Thanks go to R. S. Michalski, my M.S. thesis advisor, for suggesting Eleusis as a domain and for providing numerous ideas including the basic idea for the decomposition algorithm. Thanks also to David Shur for proofreading this paper. NSF grant no. MCS-76-22940. This research was supported by References [l] Abbott, Robert, “The New Eleusis,” Available from Abbott at Box 1175, General Post Office, New York, NY 10001 ($1.00). [2] Buchanan, B.G., D. H. Smith, W. C. White! R. J. Gritter, E. A. Feigenbaum, J. Lederberg, C. Djerassr, Journal of the American Chemical Society, 98 (1976) p. 6168. [ll] Soloway, E., “Learning = Interpretation + Generalization: a case study in knowledge-directed learning,” PhD Thesis, COINS TR 78-13, University of Massachusetts, Amherst, MA., 1978. [12] Vere, S. A., “Induction of Relational Productions in the Presence of Background Information,” In Proceedings of the Fifrh International Joint Conference on Artificial Intelligence, MIT, Cambridge, MA., 1977. [3] Dietterich, Thomas G., “The Methodology of Knowledge Layers for Inducing Descriptions of Sequentially Orderid Events,” MS Thesis, Dent of Corn. Sci., Univ. of Illinois, Urbana, October, 1979. - 220
MODELLING STUDENT ACQUISITION OF PROBLEM-SOLVING SKILLS Robert Smith Department of Computer Science Rutgers University New Brunswick, N. J. 08903 ABSTRACT This paper describes the design of a system that simulates a human student learning to prove tneorems in logic by interacting with a curriculum designed to teach those skills. The paper argues that sequences in this curriculum use instructional strateu, and that the student recognizgs these strategies in driving the learning process. I. INTRODUCTION A central issue in the design of learning systems (LS's) is the classification of the sources of information that the system uses for its acquisition. The general notion is that an LS begins with certain knowledge and capabilities, and then extracts information from training sequences or experiments in the acquisition process. Implicit within this general characterization is the idea that the LS enforces some kind of interpretation on the training sequence by way of driving the acquisition process. Often the nature of the interpretation is left implicit. An example of an LS that makes somewhat explicit the interpretation of its training sequence is Winston's program for learning structure descriptions, where the program interprets the near m example as providing key information about the structure being learned [lr]. We speculate that much human learning takes place in a more richly structured environment, wherein the human learner is interpreting the instructional seauences provided to him in a richer way than LS's typically envision. Indeed, most LS's nave made few if any explicit assumptions about the structure of the environment in which the training sequence occurs. One particularly rich environment is learning & teaching. We suggest that teachers use certain instructional strategies in presenting material, and that students recognize these strategies. This paper describes the motivation for an LS called REDHOT. REDHOT is a simulation of a student -------- * I would like to thank Phyllis Walker for analysis of the curriculum and student protocols; Saul Amarel, Tom Mitchell, Don Smith, and N. Sridharan for many ideas and assistance. The research reported here is sponsored by the Office of Naval Research under contract N00014-79-C-0780. We gratefully acknowledge their support for this work. acquiring the skill of constructing proofs in elementary logic. We characterize this skill as consisting of (1) primitive operators in the form of natural-deduction rules of inference, (2) "macro moves" consisting of several rules of inference, and (3) application heuristics that describe when to use the rules. The central theme of this research is to model the acquisition of these skills around the recognition of instructional strategies in a curriculum designed to teach the student. II. CURRICULUM FOR REDHOT We are using the curriculum from the computer- assisted instruction (CA11 course developed at Stanford University by Patrick Suppes and co- workers. (See [3] for details.) This CA1 system is used as the sole mode of instruction for approximately 300 Stanford students each year. We chose a CA1 curriculum because we thought that the explicitness inherent in a successful CA1 system developed and tested over a number of years might make the instructional strategies relatively clear. The curriculum contains explanatory text, examples, exercises, and hints. The explanatory text is rather large, and uses both computer- generated audio and display as modes of presentation. 
The presentation strategy used by the actual CA1 system is linear through the curriculum. For use with REDHOT, we have developed a stylized curriculum in an artificial language CL. It contains the examples, exercises, partially formed rules, and hints. The exercises are the central part of the curriculum. There are approximately 500 theorems that the student is asked to prove, with about 200 in propositional logic, 200 in elementary algebra, and 100 in the theory of quantification. The human student performs these exercises by giving the steps of the developing proof to an interactive proof checker. This proof checker is the heart of the original instructional system. We developed a version of this proof checker for use with the REDHOT student simulation. III. THE DESIGN OF REDHOT REDHOT learns rules for proving theorems. These rules are initially the natural deduction 331 rules of many logic systems. The student improves upon these rules by building macro operators and by adding heuristics to existing rules--i.e., giving strategic advice in the left-hand-sides of the production rules.* For example, the rule AA ("affirm the antecedent", the system's version of modus ponens) can be stated as the following rule: Rule AA I GOAL: Derive Q I Prerequisites: f P already on some line i I P -> Q on some line j I Method: f AA command on lines i and j I Heuristics: None (yet) I Effectiveness: Perfect I In the above, we have adopted a style of rule presentation that is similar to the rules of a production system. The letters P and Q stand for arbitrary formulas, and i and j for arbitrary lines of the already existing proof. The aoal tells what the rule will produce. The mreauisites tell us that two formulas of the appropriate forms are needed, and the method gives the schematic command to the proof checker that will accomplish this. The heuristics associated with a rule are learned by the system, and indicate how the rule should be used. Effectiveness of the rule is also learned, and indicates how effective the rule will be in achieving its goal given its prerequisites. The effectiveness of this rule is "perfect" since the rule is given as such in the curriculum. The underlying problem solver for REDHOT is a depth-first, heuristic problem solving over the existing rules. It is assumed that: (a) the system has sufficient memory and CPU facilities for this search, if necessary; and (b) that the underlying pattern matchers are sufficient to match arbitrary formulas, line numbers, etc. Both of these assumptions are open to criticism on the grounds of being psychologically unrealistic. One of the goals of the construction of REDHOT is to decide on plausible ways to restrict the problem solver and pattern matcher to make a more realistic system. REDHOT learns the heuristics for the application of the rule. These heuristics are stated in a heuristic-language HL, which is strongly tied to the curriculum language CL. The heuristics associated with a rule guide the student as to when to try that rule or not to try it. For example, the curriculum appears to teach the student that the AA rule is a plausible thing to try when the prerequisites are available -------- * See [II for a conceptual discussion of the levels through which this process might proceed. One way to regard this research is a suggestion of the mechanism for this acquisition of heuristics and macro moves. (whether or not the goal is immediately desired). This is one of the primitives of the HL heuristics language. 
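To make the rule format concrete, here is a small sketch of how a rule such as AA might be encoded and matched against a partial proof. This is not REDHOT's actual code or rule language, which is not listed here; the record fields and the tuple encoding of formulas are assumptions made purely for illustration.

    # Sketch (not REDHOT itself): the AA rule as a production-like record, plus a
    # matcher that checks its prerequisites against the lines of a partial proof.
    # Formulas are nested tuples, e.g. ("->", "P", "Q") stands for P -> Q.

    AA_RULE = {
        "name": "AA",
        "goal": "derive Q",
        "prerequisites": "P on some line i, and P -> Q on some line j",
        "method": "AA command on lines i and j",
        "heuristics": [],            # filled in later as application heuristics are learned
        "effectiveness": "perfect",
    }

    def aa_prerequisites(goal, proof_lines):
        # Return line numbers (i, j) such that line i holds some formula P and
        # line j holds P -> goal; return None if AA cannot derive the goal.
        for i, p in enumerate(proof_lines, start=1):
            for j, f in enumerate(proof_lines, start=1):
                if isinstance(f, tuple) and f[0] == "->" and f[1] == p and f[2] == goal:
                    return (i, j)
        return None

    # Partial proof: line 1 is S, line 2 is S -> Q.  AA applies on lines (1, 2).
    proof = ["S", ("->", "S", "Q")]
    print(aa_prerequisites("Q", proof))    # (1, 2)

A learned heuristic would then attach to the "heuristics" field, for example the advice, described above, that AA is worth trying whenever its prerequisites are already on the proof, whether or not the goal is immediately desired.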
An example of a macro operator that is not a primitive rule is the "multiple AA" macro move. A paraphrase of this macro operator might be: I Multiple-AA Macro Move I I IF you want to prove Q : I AND I I have P, P -> P1, PI -> P2, . . . . Pn -> Q I : THEN I I make multiple AA applications, I I which is guaranteed to succeed I We discuss below t.he training sequence that teaches this macro move. IV. m RECOGNITION @ INSTRUCTIONAL STRATEGIES REDHOT bases its acquisition of application heuristics and macro operators on its recognition of instructional strategies in the training sequence. For example, consider the sequences of exercises in Figure I, taken from the actual curriculum. The sequence, which is at the beginning of the whole curriculum, gives the primitive rule of inference AA, then shows successive elaborations of the use of that rule of inference. REDHOT detects this to be a use of a strategy called focus a& elaborate, in which a rule is first focussed, and then a particular elaboration is given. Teacher: Here is a rule called AA. Teacher: Here are some exercises involving AA: 1. Goal: Q Premises: S -> Q, S 2-5 [Several more exercises with different formulas. 1 6. Goal: W Premises: S -> Q, Q -> W, S 7. Goal: S Premises: R -> S, Q -> W, W -> R, Q 899 [Two more similar exercises involving multiple applications of AA.] Figure _1_ Sequence of Exercises for Learning Multiple Application of AA Command In the above training sequence, REDHOT takes steps l-5 as focussing on the AA rule, and steps 6- 9 as providing a useful elaboration of that rule, in the form of the macro operator for multiple application. 222 A second example of the use of an instructional strategy concerns removing possible bugs in learned heuristics and macro operators. We illustrate this with the macro rule for conditional proof, a common strategy in logic and mathematics, which we paraphrase as follows: I Conditional Proof MACRO MOVE I 1 IF you want to prove P -> Q I I THEN I I ASSUME P as a working premise I I PROVE Q (likely from P); I I APPLY "CP" rule, removing premise I I The actual instructional sequence goes to great length to teach this principle, and students typically have a difficult time with the principle; one defective version of the rule that students seem to learn is the following: I "Defective" MACRO MOVE I I IF you have a formula P -> Q I I AND you want to prove Q I I THEN I I ASSUME P as a working premise; I I PROVE Q using AA; I This is a very poor strategy; but the evidence suggest that over half of the students learn it. The following exercise seems to help students debug the rule "Defective" . Derive: S -> (Q OR R) Premise: (R -> R) -> (Q OR R) In examining student protocols, we see that many students will try several times to apply the "Defective" rule to this exercise. Finally, (we speculate) they realize that (R -> R) is already something that they know how to prove, using a previously learned macro operator. Then, the actual proof becomes obvious, the student corrects the defective rule, and goes on to handle similar exercises correctly. We call this instructional strategy "focus and frustrate" wherein a student discovers --somewhat traumatically--that a rule he learned is defective. Therefore, an exercise such as the above is not just randomly selected, but instead tests possible student "bugs" in an exact way. Notice that it is one of the simplest exercises that will discriminate between the correct and defective formulations of the macro rule for conditionaL proof. 
(See [2] for a discussion of "debugging" student skills.)

V. REDHOT AND LEARNING SYSTEMS

Like many LS's, REDHOT starts with the ability to state everything it will "learn", in some sense at least. The initial rules for solving the problem (the natural deduction rules for logic) are complete with respect to the underlying problem solver -- unless it is restricted in time/space (in practice it is). The heuristic and macro languages are also given in advance, and they of course define a space of the possible rules that might be learned. So, the object is to select among heuristics and macro rules in this space. One way to formulate doing this is by experimentation or exploration. REDHOT selects objects from this meta-space by being taught. Learning by being taught consists of the "teacher" laying out exercises in an organized and structured way, and the student recognizing something of that structure. The student makes -- believes that he is entitled to make -- fairly bold hypotheses about the rules he is learning, and relies on the training sequence to contain exercises that will check for common errors that he, the student, may have made in formulating these rules.

REDHOT compares somewhat radically to many LS's that rely on a somewhat slow, computationally coherent delimitation of the rule (or concept) involved. We speculate that learning by "discovery" or "experimentation" is a slow process for even humans, done over the eons of time and through social interaction. Most human learning is by being taught, and one can argue that AI should give attention to the relation between learning and teaching, in terms of modelling the acquisition of concepts, problem-solving skills, and natural language. We further speculate that learning by "discovery" will be aided by extracting as much information as possible from the structure of the environment in which the LS operates.

REFERENCES

[1] Amarel, Saul, "An Approach to Problem Solving and Theorem Proving in the Propositional Calculus", in Systems and Computer Science, (Hart and Takasu, eds.), Toronto: University of Toronto Press, 1967.

[2] Brown, John Seely, Burton, Richard R., and Larkin, Kathy M., "Representing and Using Procedural Bugs for Educational Purposes", in Proceedings of the 1977 Annual Conference of the Association for Computing Machinery, New York, 1977.

[3] Suppes, P., Smith, R. L., and Beard, M., "University-level Computer-assisted Instruction at Stanford: 1975", in Instructional Science, 1977, 4, 151-185.

[4] Winston, Patrick Henry, "Learning Structural Descriptions from Examples", Ph.D. thesis, in The Psychology of Computer Vision, (Patrick Henry Winston, ed.), McGraw-Hill, New York, 1975.
A Computer Model of Child Language Learning Mallory Selfridge Yale University Abstract A computer program modelling a child between the ages of 1 and 2 years is described. This program is based on observations of the knowledge this child had at age 1, the comprehension abilities he had at age 2, and the language experiences he had between these ages. The computer program described begins at the age 1 level, is given similar language experiences, and uses inference and learning rules to acquire comprehension at the age 2 level. Introduction This paper describes a computer model of the development of comprehension abilities in a child, Joshua, between the ages of one and two years. The program begins with the kind of knowledge that Joshua had at age 1, when he under- stood no language, and learns to understand com- mands involving action, object, and spatial rela- tion words at Joshua's age 2 level. It does so by being given the kind of language experiences Joshua had between the ages of 1 and 2, and making use of rules to 1) infer the meaning of utter- ances, 2) attend to words, and 3) learn language meaning and structure. The program passes through a reasonable developmental sequence and makes the same kind of errors that children make at inter- mediate stages. This work suggests that language learning to the 2 year old level can be accounted for primari- ly by the learning of word meaning and structure, that world knowledge is crucial to enable the child to infer the meaning of utterances, and that children hear language in situations which enable them to perform such inferences. The success of the program in modelling Joshua's language development -- both its progression and its errors -- suggests that it embodies a plausible theory of how Joshua learned to understand language. While there are several aspects of the model which are unrealistic (for example, segmented input, no am- biguous words, no simultanious conceptual develop- ment), there is reason to believe that future work can sucessfully address these issues. Further de- tails can be found in Selfridge (1980). This paper first considers Joshua's initial state of knowledge at age 1, and then his comprehension abilities at age 2. It describes the kind of language experiences he had, and several kinds of learning rules which can account for Joshua's development. The computer program incor- porating these observations and rules * described, and finally some conclusions ai: presented. Joshua's Initial Knowledge The first component of --- a computer model of the development of Joshua's comprehension is Joshua's knowledge prior to his language learning. Observations like the follow- ing suggest that Joshua had considerable knowledge of objects, actions, spatial relations, and ges- tures at age 1 (ages are given in YEARS:MONTHS:DAYS): 0:11:19 Joshua and I are in the playroom. I build a few block towers for him to knock down, but he doesn't do so; rather, he dismantles them removing the blocks from the top, one at a time 1:0:16 Joshua and I are in the playroom . Joshua takes a toy cup, and pretends to drink out of it. 1:2 Joshua is sitting in the living room playing with a ball. I hold my hand out to him, and he gives me the ball. The above observations show that Joshua knew the properties and functions of objects like blocks, cups and balls. He knew actions that could be per- formed with them, and various spatial relations that they could enter into. Finally, he knew that behavior can be signaled through gestures by other people. 
Joshua's Comprehension Abilities at Age 2
At age 2, Joshua could respond correctly to commands with unlikely meaning and structure. His correct responses suggest full understanding of them. For example, consider the following:

2:0:5 We walk into the living room and Joshua shows us his slippers. His mother says "Put your slippers on the piano." Joshua picks up the slippers and puts them on the piano keys, looking at his mother. She laughs and says "That's silly." Joshua removes the slippers.

The meaning of this utterance is unlikely since slippers do not generally go on piano keys, and piano keys don't generally have things put on them. His response suggests that he was guided by full understanding of the meanings of the words in "Put your slippers on the piano." At age 2 Joshua also understood language structure, as the following example shows:

2:0:0 Joshua and I are in the playroom, my tape recorder is on the floor in front of me. I say "Get on the tape recorder, Joshua". Joshua looks at me oddly, and looks at the tape recorder. I repeat "Get on the tape recorder." Joshua moves next to the tape recorder. I once more repeat "Get on the tape recorder." Joshua watches me intently, and lifts his foot up and slowly moves it over the tape recorder to step on it. I laugh and pull the tape recorder away.

It seems that Joshua understood "Get on the tape recorder" the first time I said it, and that his reluctance to comply reflected his knowledge that what I was asking was very unlikely. That is, Joshua understood that the tape recorder was the object to be underneath him, although this is unlikely given his experience with it. This, in turn, suggests that Joshua understood the structure of the word "on", namely, that the word whose meaning is the supporting surface follows "on". Thus a program modelling Joshua at age 2 must understand utterances using language structure.

Joshua's Language Experiences
In the year between the ages of 1 and 2, Joshua experienced situations which allowed him to make inferences concerning the utterances he heard. In this section, three examples of such situations are given, and inference rules accounting for Joshua's response and attention to words are presented. In the first example, I am using an utterance and simultaneously signalling the meaning of that utterance through gestures:

1:2:17 We are sitting in the living room, Joshua is holding a book. I look at Joshua, maintain eye contact for a moment, hold my hand out to him and say "Give me the book, Joshua." Joshua holds the book out to me.

In this situation, Joshua probably inferred that the meaning of "Give me the book, Joshua." was the same as that signalled by the gestures. The following rule captures this idea:

Gestural Meaning Inference: If an utterance is accompanied by gestures with associated meanings, then infer that the utterance means the same as the gestures.

Knowledge of object function and properties helped Joshua infer responses in other situations. In the following, Joshua used his knowledge that books can be opened in his response:

1:0:9 Joshua has a book in his hand, and is looking at it, turning it over, and examining it. His mother says "Open the book, open the book..." Joshua opens the book. She says, "Good Joshua, good."

A rule summarizing this inference is the following:

Function/Property Inference: If an utterance is heard while interacting with an object, then the meaning of the utterance involves a function or property of that object.
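A minimal sketch of how these two inference rules might operate is given below. It is an illustrative simplification under assumed data structures (the context fields and helper names are invented here); the actual program builds Conceptual Dependency structures in LISP.

```python
# Illustrative sketch of the Gestural Meaning and Function/Property inference rules.
# The context dictionary, world-knowledge format, and return values are hypothetical.

def infer_utterance_meaning(context, world_knowledge, gesture_meanings):
    """Guess what a not-yet-understood utterance means from the situation it occurs in."""
    # Gestural Meaning Inference: gestures accompanying the utterance carry its meaning.
    for gesture in context.get("gestures", []):
        if gesture in gesture_meanings:
            return gesture_meanings[gesture]
    # Function/Property Inference: an utterance heard while interacting with an
    # object probably concerns one of that object's functions or properties.
    obj = context.get("object_in_hand")
    if obj and world_knowledge.get(obj, {}).get("functions"):
        return {"act": world_knowledge[obj]["functions"][0], "object": obj}
    return None   # no inference possible; the utterance is not understood

# "Open the book, open the book..." heard while the child is handling a book:
world = {"BOOK1": {"functions": ["open", "look-at"]}}
print(infer_utterance_meaning({"object_in_hand": "BOOK1"}, world, {}))
# -> {'act': 'open', 'object': 'BOOK1'}
```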
Parent speech to children possesses many attention-focussing characteristics (e.g. Newport, 1976). The following example is typical:

1:18:0 Joshua's father is trying to demonstrate that Joshua knows the names of the upstairs rooms, and has put a toy lawnmower in the bathroom. He says "Where is the lawnmower, Josh? Its in the BATHROOM. The LAWNMOWER is in the BATHROOM. BATHROOM!"

Joshua's attention to "bathroom" in this example can be explained by the following rule:

Attention Inference: If a word is emphasized, repeated, or said in isolation, then attend to it.

These are the kind of rules which I postulate enabled Joshua to infer the meaning of utterances from context, and attend to part of the utterance. The program must be equipped with such rules and must be given input in similar contexts.

Learning Rules
This section will consider Joshua's learning of action, object, and relation words, and language structure. It presents accounts of how Joshua might have learned each of these. Most of the rules have their roots in the learning strategies proposed by Bruner, Goodnow, and Austin (1956). One way Joshua learned the names of objects is by having them named for him, as in the following example:

1:0:0 Joshua is crying. His mother picks him up and goes over to the refrigerator. She gets some juice, holds it up, and asks, "Do you want some JUICE?" Joshua keeps crying. She gets a banana and asks, "Do you want some BANANA, Joshua?" Joshua reaches for it.

The following rule models Joshua's ability to learn by having objects named:

Direct Naming Inference: If a word and an object are both brought to attention, infer the word is the object's name.

This rule, and other object word learning rules, can account for how Joshua learned object words such as "slippers", "piano", "ball", and "table".

Action words can be learned via inferences about other known words in the utterance. In the following example, summarized from Schank and Selfridge (1977), Hana could have inferred the meaning of "put" based on her knowledge of the meanings of "finger" and "ear":

(age 1) Hana knows the words "finger" and "ear", but not "put." She was asked to "Put your finger in your ear," and she did so.

The following two rules can account for learning "put" in situations like this. The first suggests that "put" would initially be learned as "put something in something else." The second, applied after the first in a slightly different situation, would refine the meaning of "put" to "put something someplace".

Response Completion Inference: Infer the meaning of an unknown word to be the meaning of the entire utterance with the meanings of the known words factored out.

Meaning Refinement Inference: If part of the meaning of a word is not part of the meaning of an utterance it occurs in, remove that part from the word's meaning.

Rules like the above can account for Joshua learning action words like "put", "bring", "give", and so on. However, they can also account for Joshua learning relation words, such as "on" and "in". If Joshua knew "put", "ball", and "box", say, and was asked to "put the ball in the box", these rules would account for his learning that "in" referred to the "contained" relation. These, then, are the sort of rules the program uses to learn word meanings.
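The sketch below illustrates how the Response Completion and Meaning Refinement inferences could be realized. It is only a sketch under simplifying assumptions: word meanings are reduced to flat sets of meaning components, and the component labels and the LEXICON table are invented for illustration, whereas the actual program manipulates Conceptual Dependency structures in LISP.

```python
# Illustrative sketch of the word-meaning learning rules described above.
# Meanings are simplified to sets of components; all labels are hypothetical.

LEXICON = {}   # word -> set of meaning components

def response_completion(utterance_words, inferred_meaning):
    """Assign an unknown word the utterance meaning minus the known words' meanings."""
    known = set()
    for w in utterance_words:
        known |= LEXICON.get(w, set())
    unknown = [w for w in utterance_words if w not in LEXICON]
    if len(unknown) == 1:                       # only handle a single unknown word
        LEXICON[unknown[0]] = set(inferred_meaning) - known

def meaning_refinement(word, inferred_meaning):
    """Drop meaning components of `word` that are absent from this utterance's meaning."""
    if word in LEXICON:
        LEXICON[word] &= set(inferred_meaning)

# "Put your finger in your ear": "finger" and "ear" are known, "put" is not.
LEXICON["finger"] = {"FINGER"}
LEXICON["ear"] = {"EAR"}
response_completion(["put", "finger", "ear"],
                    {"PTRANS", "FINGER", "EAR", "CONTAINED"})
print(LEXICON["put"])          # {'PTRANS', 'CONTAINED'}: "put something in something"

# A later "put" utterance without containment refines the meaning.
meaning_refinement("put", {"PTRANS", "BALL", "TABLE", "ON-TOP"})
print(LEXICON["put"])          # {'PTRANS'}: "put something someplace"
```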
The program's rule for learning language structure is more direct. It is based around the two structural predicates, PRECEDES and FOLLOWS, which relate the positions of words and concepts in short-term memory. This rule models Joshua's acquisition of structural information upon hearing utterances he understands, and appears below:

Structure Learning Rule: If a slot filler occurs preceding or following a word's meaning, then update the word's definition with that information.

This rule accounts for Joshua learning that the filler of the VAL slot of "in"'s meaning -- (CONT VAL (NIL)) -- is found FOLLOWing "in" in the utterance.
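A minimal sketch of the Structure Learning Rule follows. The dictionary layout and helper name are assumptions made for illustration; the real program attaches PRECEDES/FOLLOWS facts to Conceptual Dependency definitions in LISP.

```python
# Illustrative sketch of the Structure Learning Rule: record whether a slot's
# filler was found before or after the word itself in the utterance.

def learn_structure(definition, word_pos, filler_pos, slot):
    """Record a PRECEDES/FOLLOWS fact for `slot` in the word's definition."""
    relation = "FOLLOWS" if filler_pos > word_pos else "PRECEDES"
    definition.setdefault("structure", {})[slot] = relation

# "put the ball in the box": the filler of "in"'s VAL slot ("the box", position 5)
# follows "in" (position 3), so the program records (VAL FOLLOWS) for "in".
in_def = {"meaning": "(CONT VAL (NIL))"}
learn_structure(in_def, word_pos=3, filler_pos=5, slot="VAL")
print(in_def)   # {'meaning': '(CONT VAL (NIL))', 'structure': {'VAL': 'FOLLOWS'}}
```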
The Program
This section presents four excerpts from a run of the program, written in LISP on a DECSYSTEM-20. Each represents the program at a different stage in development as it progresses from Joshua's age 1 abilities to Joshua's age 2 abilities, using the inference rules described previously. The knowledge representation used is Conceptual Dependency (Schank, 1973), and the language understanding process embedded in the program is similar to that in Birnbaum and Selfridge (1979).

The first stage of the program corresponds to Joshua at age 1. At this stage, the program had only the knowledge ascribed to Joshua at that age. In the excerpt below, the "parent" types a lowercase utterance to the program, and the program responds with a message stating its lack of understanding. When the parent provides gestures via simulated visual input, however, the program understands, and prints the CD representation of its response.

|PARENT SAYS: give me the ball
|CHILD STARES BLANKLY AT PARENT
|CHILD RETURNS TO PLAY
|CHILD SEES: (PARENT HOLDS OUT HAND) (PARENT LOOKS AT BALL1)
|CHILD INFERS RESPONSE USING RULE: GESTURAL MEANING
|CHILD RESPONDS: (ATRANS ACTOR (CHILD) OBJECT (BALL1) TO (POSS VAL (PARENT)))

In the second stage, shown in the excerpt below, the program has learned the meaning of several words, and understands some utterances correctly. In this case, it has learned the words "put", "ball", and "box". However, notice that although it responds correctly to the first utterance given by the parent, it misunderstands the second. This sort of error is reported in Hoogenraad et al. (1976). Not knowing "on", the program incorrectly infers that the appropriate relationship is containment.

|PARENT SAYS: put the ball in the box
|CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING, FUNCTION/PROPERTY
|CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (BALL1) TO (CONT VAL (BOX1)))
|
|PARENT SAYS: put the ball on the box
|CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING, FUNCTION/PROPERTY
|CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (BALL1) TO (CONT VAL (BOX1)))

The transition from the second stage to the third is accomplished by teaching the program more words. In this case it has learned the additional words "slippers", "on", "piano", "ball", and "table". At this stage, the program can now understand "Put the slippers on the piano", whereas at any earlier stage it would not have. The program also prints out a message showing that it recognizes this as an unusual request. However, although this stage represents Joshua's age 2 understanding of word meaning, the program has not yet learned language structure. The program interprets the second utterance incorrectly, in accord with its knowledge of the usual relationships between objects. This sort of error is similar to that reported in Strohner and Nelson (1974).

|PARENT SAYS: put the slippers on the piano
|CHILD LOOKS AT PARENT STRANGELY
|CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING
|CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (SLIPPERS1) TO (TOP VAL (PIANO1)))
|
|PARENT SAYS: put the table on the ball
|CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING
|CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (BALL1) TO (TOP VAL (TABLE1)))

The fourth stage is shown in the excerpt below. The program has now learned the structure of "on", and can hence correctly understand "Put the table on the ball." In addition, it prints out a message indicating its awareness of the peculiarity of this request.

|PARENT SAYS: put the table on the ball
|CHILD LAUGHS AT UNUSUAL REQUEST
|CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING
|CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (TABLE1) TO (TOP VAL (BALL1)))

At the fourth stage, the program has successfully learned to understand a subset of language at Joshua's age 2 level. It began with world knowledge similar to that Joshua began with, was equipped with reasonable learning and inference rules, and progressed as he did by being given language experiences similar to those he experienced.

Conclusions
This paper has described a computer model of a child learning to understand commands involving action, object, and relation words. The program learns language meaning and structure to the level attained by the child at age 2, by being initially given the same kind of knowledge the child had and by being exposed to language in the same kind of contexts as the child was. The program learned language according to a reasonable progression, making the same sort of errors that children do at intermediate stages. No parts of speech or traditional grammatical constructions are learned. It also acquires structural knowledge after knowledge of meaning, because no structural knowledge can be associated with a word until the meaning of that word is learned. This aspect of the model offers an explanation for why children learn structure following meaning (Wetstone and Friedlander, 1973). In addition to English, the program has been tested on comparable subsets of Japanese, Russian, Chinese, Hebrew, and Spanish. Its performance with these languages was equivalent to its learning of English, suggesting that the program has no English-specific mechanisms.

This research suggests several conclusions. It suggests that a large part of the language learning problem lies in accounting for how the child infers the meaning of the language he hears. It argues that the mechanisms underlying the learning of meaning and structure are the same. It questions the role of traditional grammatical models both in language learning and language understanding, and suggests that models of language learning must be based on strong models of language understanding. In particular, it questions Chomsky's (1980) position that language is not learned. This work suggests that plausible learning models of language development are possible. Further research should proceed in many directions. In particular, the program discussed here should be extended to model the development of comprehension of more complex constructions, such as relative clauses, and the generation of language.

Acknowledgements
Dr. Roger Schank's assistance in this work was invaluable. Peter Selfridge provided useful comments on this paper.

Bibliography
Birnbaum, L., and Selfridge, M. (1979). Problems in Conceptual Analysis of Natural Language. Research Report 168, Department of Computer Science, Yale University.

Bruner, J.S., Goodnow, J.J., and Austin, G.A. (1956). A Study of Thinking. John Wiley and Sons, New York.

Chomsky, N. (1980). Rules and Representations, excerpted from Rules and Representations. Columbia University Press, New York.

Hoogenraad, R., Grieve, R., Baldwin, P., and Campbell, R. (1976). Comprehension as an Interactive Process. In R.N. Campbell and P.T. Smith (eds.), Recent Advances in the Psychology of Language. Plenum Press, New York.

Newport, E.L. (1976). Motherese: the Speech of Mothers to Young Children. In N.J. Castellan, D.B. Pisoni, and G.R. Potts (eds.), Cognitive Theory, Vol. II. Lawrence Erlbaum Assoc., Hillsdale, N.J.

Schank, R.C. (1973). Identification of Conceptualizations Underlying Natural Language. In R.C. Schank and K.M. Colby (eds.), Computer Models of Thought and Language. W.H. Freeman and Co., San Francisco.

Schank, R.C., and Selfridge, M. (1977). How to Learn/What to Learn. In Proceedings of the International Joint Conference on Artificial Intelligence, Cambridge, Mass.

Selfridge, M. (1980). A Process Model of Language Acquisition. Computer Science Technical Report 172, Yale University, New Haven, Ct.

Strohner, H. and Nelson, K.E. (1974). The Young Child's Development of Sentence Comprehension: Influence of Event Probability, Nonverbal Context, Syntactic Form, and Strategies. Child Development, 45:567-576.

Wetstone, H. and Friedlander (1973). The Effect of Word Order on Young Children's Responses to Simple Questions and Commands. Child Development, 44:734-740.
1980
18
10
APPROACHES TO KNOWLEDGE ACQUISITION: THE INSTRUCTABLE PRODUCTION SYSTEM PROJECT
Michael D. Rychener
Carnegie-Mellon University
Department of Computer Science
Schenley Park
Pittsburgh, PA 15213

Abstract
Progress in building systems that acquire knowledge from a variety of sources depends on determining certain functional requirements and ways for them to be met. Experiments have been performed with learning systems having a variety of functional components. The results of these experiments have brought to light deficiencies of various sorts, in systems with various degrees of effectiveness. The components considered here are: interaction language; organization of procedural elements; explanation of system behavior; accommodation to new knowledge; connection of goals with system capabilities; reformulation (mapping) of knowledge; evaluation of behavior; and compilation to achieve efficiency and automaticity. A number of approaches to knowledge acquisition tried within the Instructable Production System (IPS) Project are sketched.*

1. The Instructable Production System Project
The IPS project [6] attempts to build a knowledge acquisition system under a number of constraints. The instructor of the system gains all information about IPS by observing its interactions with its environment (including the instructor). Interaction is to take place in (restricted) natural language. The interaction is mixed initiative, with both participants free to try to influence the direction. Instruction may be about any topic or phenomenon in the system's external or internal environment. Knowledge accumulates over the lifetime of the system.

*This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory Under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

Throughout these IPS experiments, the underlying knowledge organization has been Production Systems (PSs) [2], a form of rule-based system in which learning is formulated as the addition to, and modification of, an unstructured collection of production rules. Behavior is obtained through a simple recognize-act cycle with a sophisticated set of principles for resolving conflicts among rules. The dynamic short-term memory of the system is the Working Memory (WM), whose contents are matched each cycle to the conditions of rules in the long-term memory, Production Memory.

Study of seven major attempts to construct instructable PSs with various orientations leads to recognizing the centrality of eight functional components. Listing the components and their embodiment in various versions of IPS can contribute to research on learning systems in general, by clarifying some of the important subproblems. This discussion is the first overview of the work of the project to date, and indicates its evolutionary development. Members of the IPS project are no longer working together intensively to build an instructable PS, but individual studies that will add to our knowledge about one or more of these components are continuing. Progress on the problem of efficiency of PSs has been important to the IPS project [3], but will not be discussed further here.
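The following is a minimal sketch of the recognize-act cycle just described: working memory elements are matched against rule conditions, one matching rule is chosen by a crude conflict-resolution policy, and its action adds new elements to working memory. This is only an illustration under assumed data structures; the IPS project's actual implementation used the OPS family of production-system languages (e.g., OPS4 [2]), whose matching and conflict resolution are far more sophisticated.

```python
# Illustrative recognize-act cycle. Rule names, WM element strings, and the
# "fired" marker used for refraction are all hypothetical simplifications.
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Rule:
    name: str
    conditions: Set[str]                    # WM elements that must all be present
    action: Callable[[Set[str]], Set[str]]  # returns new WM elements to add

working_memory: List[str] = ["goal: greet", "input: hello"]
production_memory = [
    Rule("respond-to-greeting",
         {"goal: greet", "input: hello"},
         lambda wm: {"output: hello there", "goal-done: greet"}),
]

def recognize_act(max_cycles: int = 5) -> None:
    for _ in range(max_cycles):
        wm = set(working_memory)
        # Recognize: rules whose conditions all hold and that have not yet fired.
        matches = [r for r in production_memory
                   if r.conditions <= wm and (r.name + " fired") not in wm]
        if not matches:
            break                       # quiescence: no rule can fire
        rule = matches[0]               # trivial conflict resolution: first match
        # Act: add the rule's results (plus a fired-marker) to working memory.
        for element in rule.action(wm) | {rule.name + " fired"}:
            working_memory.append(element)

recognize_act()
print(working_memory)
```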
2. Essential Functional Components of Instructable Systems
The components listed in this section are to be interpreted loosely as dimensions along which learning systems might vary.

Interaction. The content and form of communications between instructor and IPS can have a lot to do with ease and effectiveness of instruction. In particular, it is important to know how closely communications correspond to internal IPS structures. Similarly, we must ask how well the manifest behavior of IPS indicates its progress on a task. An IPS can have various orientations towards interactions, ranging from passive to active, with maintenance of consistency and assimilation into existing structures.

Organization. Each version of IPS approaches the issue of obtaining coherent behavior by adopting some form of organization of its 'procedural' knowledge. This may involve such techniques as collecting sets of rules into 'methods' and using signal conventions for sequencing. Whether IPS can explain its static organization and whether the instructor can see the details of procedural control are important subissues.

Explanation. A key operation in an instructable system is that of explaining how the system has arrived at some behavior, be it correct or incorrect. In the case of wrong behavior, IPS must reveal enough of its processing to allow the more intelligent instructor to determine what knowledge IPS lacks.

Accommodation. Once corrections to IPS's knowledge have been formulated by the instructor, it remains for further interactions with IPS to augment or modify itself. In the IPS framework, these modifications are taken to be changes to the rules of the system, rather than changes to the less permanent WM. As with interaction, IPS can take a passive or active approach to this process.

Connection. Manifest errors are not the only way a system indicates a need for instruction: inability to connect a current problem with existing knowledge that might help in solving it is perhaps a more fundamental one. An IPS needs ways to assimilate problems into an existing knowledge framework, and ways to recognize the applicability of, and discriminate among, existing methods.

Reformulation. Another way that IPS can avoid requiring instruction is for it to reformulate existing knowledge to apply in new circumstances. There are two aspects to this function: finding knowledge that is potentially suitable for mapping, and performing the actual mapping. In contrast to connection, this component involves transformation of knowledge in rules, either permanently or dynamically.

Evaluation. Since the instructor has limited access to what IPS is doing, it is important for IPS to be able to evaluate its own progress, recognizing deficiencies and errors as they occur so that instruction can take place as closely as possible to the dynamic point of error. Defining what progress is and formulating relevant questions to ask to fill gaps in knowledge are two subissues.

Compilation. Rules initially formed as a result of the instructor's input may be amenable to refinements that improve IPS's efficiency. This follows from several factors: during instruction, IPS may be engaged in search or other 'interpretive' executions; instruction may provide IPS with fragments that can only be assembled into efficient form later; and IPS may form rules that are either too general or too specific. Improvement with practice is the psychological analog of this capability. Anderson et al [1] have formulated several approaches to compilation.
3. Survey of Approaches
Each attempt to build an IPS has been based on the idea of an initial hand-coded kernel system, with enough structure in it to support all further growth by instruction. A kernel establishes the internal representations and the overall approach to instruction. The following are presented in roughly chronological order. Kernel1, ANA, Kernel2 and IPMSL have been fully implemented. The others were suspended at various earlier stages of development, for reasons that were rarely related to substantive or conceptual difficulties.

Kernel Version 1. The starting point for IPS is the adoption of a pure means-ends strategy: given explicit goals, rules are the means to reducing or solving them. Four classes of rules are distinguished: means rules; recognizers of success; recognizers of failure; and evocation of goals from goal-free data. The Kernel1 [6] approach further organizes rules into methods, which group together (via patterns for the same goal) a number of means, tests and failure rules. Interaction consists of language strings that roughly correspond to these methods and to system goals (among which are queries). Keywords in the language give rise to the method sequencing tags and also serve to classify and bound rules. Explanation derives from the piecing together of various goals in WM, along with associated data. The major burden of putting together raw data that may be sufficient for explanation rests on the instructor, a serious weakness.

Additive Successive Approximations (ASA). Some of the drawbacks of Kernel1 can be remedied* by orienting instruction towards fragments of methods that can be more readily refined at later times. Interaction consists of having the instructor point at items in IPS's environment (especially WM) in four ways: condition (for data to be tested), action (for appropriate operators), relevant (for essential data items), and entity (to create a symbol for a new knowledge expression). These designations result in methods that are very loose collections of rules, each of which contributes some small amount towards achievement of the goal. Accommodation is done as post-modification of an existing method in its dynamic execution context, through ten method-modification methods.

Analogy. A concerted attempt to deal with issues of connection and accommodation is represented by McDermott's ANA program [4]. ANA starts out with the ability to solve a few very specific problems, and attacks subsequent similar problems by using the methods it has analogically. The starting methods are hand-coded. Connection of a new goal to an existing method takes place via special method description rules that are designed to respond to the full class of goals that appear possible for a method to deal with by analogy. An analogy is set up by finding paths through a semantic network containing known objects and actions. As a method operating by analogy executes, rules recognize points where an analogy breaks down. Then general analogy methods are able either to patch the method directly with specific mappings or to query the instructor for new means-ends rules.

*These ideas were introduced by A. Newell in October, 1977.

Problem Spaces. Problem spaces [5]* provide a novel basis for IPS by embedding all behavior and interactions in search. A problem space consists of a collection of knowledge elements that compose states in a space, plus a collection of operators that produce new states from known ones.
A problem consists of an initial state, a desired state, and possibly path constraints. Newell's Problem Space Hypothesis (ibid.) claims that all goal-oriented cognitive activity occurs in a problem space, not just activity that is sufficiently problematical. Interaction consists of giving IPS problems and search control knowledge (hints as to how to search specific spaces). Every Kernel component must be a problem space too, and thus subject to the same modification processes. The concrete proposal as it now stands concentrates on interaction, explanation (which involves sources of knowledge about the present state of the search), and organization.

*This approach was formulated by A. Newell and J. Laird in October of 1978.

Schemas. The use of schemas as a basis for an IPS kernel** makes slot-filling the primary information-gathering operation. A slot is implemented as a set of rules. The slots are: executable method; test of completion; assimilation (connects present WM with the schema for a goal); initialization (gathers operands for a method); model (records the instruction episode for later reference); accommodation (records patches to the method); status (records gaps in the knowledge); monitoring (allows careful execution); and organization (records method structure). Orientation towards instruction is active, as in ASA. Explanation consists of interpreting the model slot, and accommodation, of fitting additions into the model. Connection is via a discrimination network composed of the aggregated assimilation slots of all schemas. Compilation is needed here, to map model to method.

Kernel Version 2. An approach with basic ideas similar to ASA and to Waterman's Exemplary Programming [8], Kernel2 [7] focusses on the process of IPS interacting with the instructor to build rules in a dynamic execution context. The instructor essentially steps through the process of achieving a goal, with IPS noting what is done and marking elements for inclusion in the rules to be built when the goal is achieved. Kernel2 includes a semantic network of information about its methods, for use as a 'help' facility. Kernel2 is the basis from which the IPMSL system, below, is built.

Semantic Network. Viewing accumulation of knowledge as additions to a semantic network is the approach taken by the IPMSL system [7]. Interaction consists of definition and modification of nodes in a net, where such nodes are represented completely as rules. Display and net search facilities are provided as aids to explanation and accommodation. The availability of traditional semantic network inferences makes it possible for IPMSL to develop an approach to connection and reformulation, since they provide a set of tools for relating and mapping knowledge into more tractable expressions.

4. Conclusions
The IPS project has not yet succeeded in combining effective versions of components as discussed above, to produce an effective IPS. The components as presently understood and developed, in fact, probably fall short of complete adequacy for such a system. But we have explored and developed a number of approaches to instructability, an exploration that has added to the stock of techniques for exploiting the advantages of PSs. We are encouraged by the ability of the basic PS architecture to enable explorations in a variety of directions and to assume a variety of representations and organizations.

Acknowledgments. Much of the work sketched has been done jointly over the course of several years.
Other project members are (in approximate order of duration of commitment to it): Allen Newell, John McDermott, Charles L. Forgy, Kamesh Ramakrishna, Pat Langley, Paul Rosenbloom, and John Laird. Helpful comments on this paper were made by Allen Newell, Jaime Carbonell, David Neves and Robert Akscyn.

References
1. Anderson, J. R., Kline, P. J., and Beasley, C. M. Jr. A Theory of the Acquisition of Cognitive Skills. Tech. Rept. 77-1, Yale University, Dept. of Psychology, January, 1978.
2. Forgy, C. L. OPS4 User's Manual. Tech. Rept. CMU-CS-79-132, Carnegie-Mellon University, Dept. of Computer Science, July, 1979.
3. Forgy, C. L. On the Efficient Implementation of Production Systems. Ph.D. Th., Carnegie-Mellon University, Dept. of Computer Science, February 1979.
4. McDermott, J. ANA: An Assimilating and Accommodating Production System. Tech. Rept. CMU-CS-78-156, Carnegie-Mellon University, Dept. of Computer Science, December, 1978. Also appeared in IJCAI-79.
5. Newell, A. Reasoning, problem solving and decision processes: the problem space as a fundamental category. In Attention and Performance VIII, Nickerson, R., Ed., Erlbaum, Hillsdale, NJ, 1980.
6. Rychener, M. D. and Newell, A. An instructable production system: basic design issues. In Pattern-Directed Inference Systems, Waterman, D. A. and Hayes-Roth, F., Eds., Academic, New York, NY, 1978, pp. 135-153.
7. Rychener, M. D. A Semantic Network of Production Rules in a System for Describing Computer Structures. Tech. Rept. CMU-CS-79-130, Carnegie-Mellon University, Dept. of Computer Science, June, 1979. Also appeared in IJCAI-79.
8. Waterman, D. A. Rule-Directed Interactive Transaction Agents: An Approach to Knowledge Acquisition. Tech. Rept. R-2171-ARPA, The Rand Corp., February, 1978.

**Schemas were first proposed for IPS by Rychener, May, 1978.
1980
19

Dataset Card for "AI-paper-crawl"

The dataset contains 11 splits, corresponding to 11 conferences.

For each split, there are several fields:

  1. "index": Index number starting from 0. It's the primary key;
  2. "text": The content of the paper in pure text form. Newline is turned into 3 spaces if "-" is not detected;
  3. "year": A string of the paper's publication year, like "2018". Transform it into int if you need to;
  4. "No": A string of index number within a year. 1-indexed. In "ECCV" split, the "No" is index number throughout the entire split. It only provides a reference of the order that these papers are accessed, instead of the real publication order.

The "ICLR" split may miss roughly 20%-25% of the papers, since it's collected by searching on arxiv, which may return 0 or more than 1 results.

